Publications

  • Sjerps, M. J., Mitterer, H., & McQueen, J. M. (2011). Listening to different speakers: On the time-course of perceptual compensation for vocal-tract characteristics. Neuropsychologia, 49, 3831-3846. doi:10.1016/j.neuropsychologia.2011.09.044.

    Abstract

    This study used an active multiple-deviant oddball design to investigate the time-course of normalization processes that help listeners deal with between-speaker variability. Electroencephalograms were recorded while Dutch listeners heard sequences of non-words (standards and occasional deviants). Deviants were [ɪpapu] or [ɛpapu], and the standard was [ɪɛpapu], where [ɪɛ] was a vowel that was ambiguous between [ɛ] and [ɪ]. These sequences were presented in two conditions, which differed with respect to the vocal-tract characteristics (i.e., the average 1st formant frequency) of the [papu] part, but not of the initial vowels [ɪ], [ɛ] or [ɪɛ] (these vowels were thus identical across conditions). Listeners more often detected a shift from [ɪɛpapu] to [ɛpapu] than from [ɪɛpapu] to [ɪpapu] in the high F1 context condition; the reverse was true in the low F1 context condition. This shows that listeners’ perception of vowels differs depending on the speaker’s vocal-tract characteristics, as revealed in the speech surrounding those vowels. Cortical electrophysiological responses reflected this normalization process as early as about 120 ms after vowel onset, which suggests that shifts in perception precede influences due to conscious biases or decision strategies. Listeners’ abilities to normalize for speaker vocal-tract properties are for an important part the result of a process that influences representations of speech sounds early in the speech processing stream.
  • Skiba, R. (1991). Eine Datenbank für Deutsch als Zweitsprache Materialien: Zum Einsatz von PC-Software bei Planung von Zweitsprachenunterricht. In H. Barkowski, & G. Hoff (Eds.), Berlin interkulturell: Ergebnisse einer Berliner Konferenz zu Migration und Pädagogik. (pp. 131-140). Berlin: Colloquium.
  • Skoruppa, K., Cristia, A., Peperkamp, S., & Seidl, A. (2011). English-learning infants' perception of word stress patterns [JASA Express Letter]. Journal of the Acoustical Society of America, 130(1), EL50-EL55. doi:10.1121/1.3590169.

    Abstract

    Adult speakers of different free stress languages (e.g., English, Spanish) differ both in their sensitivity to lexical stress and in their processing of suprasegmental and vowel quality cues to stress. In a head-turn preference experiment with a familiarization phase, both 8-month-old and 12-month-old English-learning infants discriminated between initial stress and final stress among lists of Spanish-spoken disyllabic nonwords that were segmentally varied (e.g. [ˈnila, ˈtuli] vs [luˈta, puˈki]). This is evidence that English-learning infants are sensitive to lexical stress patterns, instantiated primarily by suprasegmental cues, during the second half of the first year of life.
  • Slobin, D. I., Bowerman, M., Brown, P., Eisenbeiss, S., & Narasimhan, B. (2011). Putting things in places: Developmental consequences of linguistic typology. In J. Bohnemeyer, & E. Pederson (Eds.), Event representation in language and cognition (pp. 134-165). New York: Cambridge University Press.

    Abstract

    The concept of 'event' has been posited as an ontological primitive in natural language semantics, yet relatively little research has explored patterns of event encoding. Our study explored how adults and children describe placement events (e.g., putting a book on a table) in a range of different languages (Finnish, English, German, Russian, Hindi, Tzeltal Maya, Spanish, and Turkish). Results show that the eight languages grammatically encode placement events in two main ways (Talmy, 1985, 1991), but further investigation reveals fine-grained crosslinguistic variation within each of the two groups. Children are sensitive to these finer-grained characteristics of the input language at an early age, but only when such features are perceptually salient. Our study demonstrates that a unitary notion of 'event' does not suffice to characterize complex but systematic patterns of event encoding crosslinguistically, and that children are sensitive to multiple influences, including the distributional properties of the target language in constructing these patterns in their own speech.
  • Small, S. L., Hickok, G., Nusbaum, H. C., Blumstein, S., Coslett, H. B., Dell, G., Hagoort, P., Kutas, M., Marantz, A., Pylkkanen, L., Thompson-Schill, S., Watkins, K., & Wise, R. J. (2011). The neurobiology of language: Two years later [Editorial]. Brain and Language, 116(3), 103-104. doi:10.1016/j.bandl.2011.02.004.
  • De Smedt, K., & Kempen, G. (1991). Segment Grammar: A formalism for incremental sentence generation. In C. Paris, W. Swartout, & W. Mann (Eds.), Natural language generation and computational linguistics (pp. 329-349). Dordrecht: Kluwer Academic Publishers.

    Abstract

    Incremental sentence generation imposes special constraints on the representation of the grammar and the design of the formulator (the module which is responsible for constructing the syntactic and morphological structure). In the model of natural speech production presented here, a formalism called Segment Grammar is used for the representation of linguistic knowledge. We give a definition of this formalism and present a formulator design which relies on it. Next, we present an object- oriented implementation of Segment Grammar. Finally, we compare Segment Grammar with other formalisms.
  • Smits, R. (1998). A model for dependencies in phonetic categorization. Proceedings of the 16th International Congress on Acoustics and the 135th Meeting of the Acoustical Society of America, 2005-2006.

    Abstract

    A quantitative model of human categorization behavior is proposed, which can be applied to 4-alternative forced-choice categorization data involving two binary classifications. A number of processing dependencies between the two classifications are explicitly formulated, such as the dependence of the location, orientation, and steepness of the class boundary for one classification on the outcome of the other classification. The significance of various types of dependencies can be tested statistically. Analyses of a data set from the literature show that interesting dependencies in human speech recognition can be uncovered using the model.
  • De Sousa, H. (2011). Changes in the language of perception in Cantonese. The Senses & Society, 6(1), 38-47. doi:10.2752/174589311X12893982233678.

    Abstract

    The way a language encodes sensory experiences changes over time, and often this correlates with other changes in the society. There are noticeable differences in the language of perception between older and younger speakers of Cantonese in Hong Kong and Macau. Younger speakers make finer distinctions in the distal senses, but have less knowledge of the finer categories of the proximal senses than older speakers. The difference in the language of perception between older and younger speakers probably reflects the rapid changes that happened in Hong Kong and Macau in the last fifty years, from an underdeveloped and less literate society, to a developed and highly literate society. In addition to the increase in literacy, the education system has also undergone significant Westernization. Western-style education systems have most likely created finer categorizations in the distal senses. At the same time, the traditional finer distinctions of the proximal senses have become less salient: as the society became more urbanized and sanitized, people have had fewer opportunities to experience the variety of olfactory sensations experienced by their ancestors. This case study investigating interactions between social-economic 'development' and the elaboration of the senses hopefully contributes to the study of the ineffability of senses.
  • Stivers, T. (2011). Morality and question design: 'Of course' as contesting a presupposition of askability. In T. Stivers, L. Mondada, & J. Steensig (Eds.), The morality of knowledge in conversation (pp. 82-106). Cambridge: Cambridge University Press.
  • Stivers, T., Mondada, L., & Steensig, J. (2011). Knowledge, morality and affiliation in social interaction. In T. Stivers, L. Mondada, & J. Steensig (Eds.), The morality of knowledge in conversation (pp. 3-26). Cambridge: Cambridge University Press.
  • Stivers, T. (1998). Prediagnostic commentary in veterinarian-client interaction. Research on Language and Social Interaction, 31(2), 241-277. doi:10.1207/s15327973rlsi3102_4.
  • Stolker, C. J. J. M., & Poletiek, F. H. (1998). Smartengeld - Wat zijn we eigenlijk aan het doen? Naar een juridische en psychologische evaluatie. In F. Stadermann (Ed.), Bewijs en letselschade (pp. 71-86). Lelystad, The Netherlands: Koninklijke Vermande.
  • Suppes, P., Böttner, M., & Liang, L. (1998). Machine Learning of Physics Word Problems: A Preliminary Report. In A. Aliseda, R. van Glabbeek, & D. Westerståhl (Eds.), Computing Natural Language (pp. 141-154). Stanford, CA, USA: CSLI Publications.
  • Swaab, T. Y., Brown, C. M., & Hagoort, P. (1998). Understanding ambiguous words in sentence contexts: Electrophysiological evidence for delayed contextual selection in Broca's aphasia. Neuropsychologia, 36(8), 737-761. doi:10.1016/S0028-3932(97)00174-7.

    Abstract

    This study investigates whether spoken sentence comprehension deficits in Broca's aphasics result from their inability to access the subordinate meaning of ambiguous words (e.g. bank), or alternatively, from a delay in their selection of the contextually appropriate meaning. Twelve Broca's aphasics and twelve elderly controls were presented with lexical ambiguities in three context conditions, each followed by the same target words. In the concordant condition, the sentence context biased the meaning of the sentence-final ambiguous word that was related to the target. In the discordant condition, the sentence context biased the meaning of the sentence-final ambiguous word that was incompatible with the target. In the unrelated condition, the sentence-final word was unambiguous and unrelated to the target. The task of the subjects was to listen attentively to the stimuli. The activational status of the ambiguous sentence-final words was inferred from the amplitude of the N400 to the targets at two inter-stimulus intervals (ISIs) (100 ms and 1250 ms). At the short ISI, the Broca's aphasics showed clear evidence of activation of the subordinate meaning. In contrast to elderly controls, however, the Broca's aphasics were not successful at selecting the appropriate meaning of the ambiguity in the short ISI version of the experiment. But at the long ISI, in accordance with the performance of the elderly controls, the patients were able to successfully complete the contextual selection process. These results indicate that Broca's aphasics are delayed in the process of contextual selection. It is argued that this finding of delayed selection is compatible with the idea that comprehension deficits in Broca's aphasia result from a delay in the process of integrating lexical information.
  • Swift, M. (1998). [Book review of LOUIS-JACQUES DORAIS, La parole inuit: Langue, culture et société dans l'Arctique nord-américain]. Language in Society, 27, 273-276. doi:10.1017/S0047404598282042.

    Abstract

    This volume on Inuit speech follows the evolution of a native language of the North American Arctic, from its historical roots to its present-day linguistic structure and patterns of use from Alaska to Greenland. Drawing on a wide range of research from the fields of linguistics, anthropology, and sociology, Dorais integrates these diverse perspectives in a comprehensive view of native language development, maintenance, and use under conditions of marginalization due to social transition.
  • Terrill, A. (2011). Languages in contact: An exploration of stability and change in the Solomon Islands. Oceanic Linguistics, 50(2), 312-337.

    Abstract

    The Papuan-Oceanic world has long been considered a hotbed of contact-induced linguistic change, and there have been a number of studies of deep linguistic influence between Papuan and Oceanic languages (like those by Thurston and Ross). This paper assesses the degree and type of contact-induced language change in the Solomon Islands, between the four Papuan languages—Bilua (spoken on Vella Lavella, Western Province), Touo (spoken on southern Rendova, Western Province), Savosavo (spoken on Savo Island, Central Province), and Lavukaleve (spoken in the Russell Islands, Central Province)—and their Oceanic neighbors. First, a claim is made for a degree of cultural homogeneity for Papuan and Oceanic-speaking populations within the Solomons. Second, lexical and grammatical borrowing are considered in turn, in an attempt to identify which elements in each of the four Papuan languages may have an origin in Oceanic languages—and indeed which elements in Oceanic languages may have their origin in Papuan languages. Finally, an assessment is made of the degrees of stability versus change in the Papuan and Oceanic languages of the Solomon Islands.
  • Terrill, A. (2011). Limits of the substrate: Substrate grammatical influence in Solomon Islands Pijin. In C. Lefebvre (Ed.), Creoles, their substrates, and language typology (pp. 513-529). Amsterdam: John Benjamins.

    Abstract

    What grammatical elements of a substrate language find their way into a creole? Grammatical features of the Oceanic substrate languages have been shown to be crucial in the development of Solomon Islands Pijin and of Melanesian Pidgin as a whole (Keesing 1988), so one might expect constructions which are very stable in the Oceanic family of languages to show up as substrate influence in the creole. This paper investigates three constructions in Oceanic languages which have been stable over thousands of years and persist throughout a majority of the Oceanic languages spoken in the Solomon Islands. The paper asks whether these are the sorts of constructions which could be expected to be reflected in Solomon Islands Pijin and shows that none of these persistent constructions appears in Solomon Islands Pijin at all. The absence of these constructions in Solomon Islands Pijin could be due to simplification: Creole genesis involves simplification of the substrate grammars. However, while simplification could be the explanation, it is not necessarily the case that all complex structures become simplified. For instance Solomon Islands Pijin pronoun paradigms are more complex than those in English, but the complexity is similar to that of the substrate languages. Thus it is not the case that all areas of a creole language are necessarily simplified. One must therefore look further than just simplification for an explanation of the presence or absence of stable grammatical features deriving from the substrate in creole languages. An account based on constraints in specific domains (Siegel 1999) is a better predictor of the behaviour of substrate constructions in Solomon Islands Pijin.
  • Tesink, C. M. J. Y., Buitelaar, J. K., Petersson, K. M., Van der Gaag, R. J., Teunisse, J.-P., & Hagoort, P. (2011). Neural correlates of language comprehension in autism spectrum disorders: When language conflicts with world knowledge. Neuropsychologia, 49, 1095-1104. doi:10.1016/j.neuropsychologia.2011.01.018.

    Abstract

    In individuals with ASD, difficulties with language comprehension are most evident when higher-level semantic-pragmatic language processing is required, for instance when context has to be used to interpret the meaning of an utterance. Until now, it is unclear at what level of processing and for what type of context these difficulties in language comprehension occur. Therefore, in the current fMRI study, we investigated the neural correlates of the integration of contextual information during auditory language comprehension in 24 adults with ASD and 24 matched control participants. Different levels of context processing were manipulated by using spoken sentences that were correct or contained either a semantic or world knowledge anomaly. Our findings demonstrated significant differences between the groups in inferior frontal cortex that were only present for sentences with a world knowledge anomaly. Relative to the ASD group, the control group showed significantly increased activation in left inferior frontal gyrus (LIFG) for sentences with a world knowledge anomaly compared to correct sentences. This effect possibly indicates reduced integrative capacities of the ASD group. Furthermore, world knowledge anomalies elicited significantly stronger activation in right inferior frontal gyrus (RIFG) in the control group compared to the ASD group. This additional RIFG activation probably reflects revision of the situation model after new, conflicting information. The lack of recruitment of RIFG is possibly related to difficulties with exception handling in the ASD group.

  • Thiebaut de Schotten, M., Dell'Acqua, F., Forkel, S. J., Simmons, A., Vergani, F., Murphy, D. G. M., & Catani, M. (2011). A lateralized brain network for visuospatial attention. Nature Neuroscience, 14, 1245-1246. doi:10.1038/nn.2905.

    Abstract

    Right hemisphere dominance for visuospatial attention is characteristic of most humans, but its anatomical basis remains unknown. We report the first evidence in humans for a larger parieto-frontal network in the right than left hemisphere, and a significant correlation between the degree of anatomical lateralization and asymmetry of performance on visuospatial tasks. Our results suggest that hemispheric specialization is associated with an unbalanced speed of visuospatial processing.

  • Torreira, F., & Ernestus, M. (2011). Realization of voiceless stops and vowels in conversational French and Spanish. Laboratory Phonology, 2(2), 331-353. doi:10.1515/LABPHON.2011.012.

    Abstract

    The present study compares the realization of intervocalic voiceless stops and vowels surrounded by voiceless stops in conversational Spanish and French. Our data reveal significant differences in how these segments are realized in each language. Spanish voiceless stops tend to have shorter stop closures, display incomplete closures more often, and exhibit more voicing than French voiceless stops. As for vowels, more cases of complete devoicing and greater degrees of partial devoicing were found in French than in Spanish. Moreover, all French vowel types exhibit significantly lower F1 values than their Spanish counterparts. These findings indicate that the extent of reduction that a segment type can undergo in conversational speech can vary significantly across languages. Language differences in coarticulatory strategies and “base-of-articulation” are discussed as possible causes of our observations.
  • Torreira, F., & Ernestus, M. (2011). Vowel elision in casual French: The case of vowel /e/ in the word c’était. Journal of Phonetics, 39(1), 50-58. doi:10.1016/j.wocn.2010.11.003.

    Abstract

    This study investigates the reduction of vowel /e/ in the French word c’était /setε/ ‘it was’. This reduction phenomenon appeared to be highly frequent, as more than half of the occurrences of this word in a corpus of casual French contained few or no acoustic traces of a vowel between [s] and [t]. All our durational analyses clearly supported a categorical absence of vowel /e/ in a subset of c’était tokens. This interpretation was also supported by our finding that the occurrence of complete elision and [e] duration in non-elision tokens were conditioned by different factors. However, spectral measures were consistent with the possibility that a highly reduced /e/ vowel is still present in elision tokens in spite of the durational evidence for categorical elision. We discuss how these findings can be reconciled, and conclude that acoustic analysis of uncontrolled materials can provide valuable information about the mechanisms underlying reduction phenomena in casual speech.
  • Tufvesson, S. (2011). Analogy-making in the Semai sensory world. The Senses & Society, 6(1), 86-95. doi:10.2752/174589311X12893982233876.

    Abstract

    In the interplay between language, culture, and perception, iconicity structures our representations of what we experience. By examining secondary iconicity in sensory vocabulary, this study draws attention to diagrammatic qualities in human interaction with, and representation of, the sensory world. In Semai (Mon-Khmer, Aslian), spoken on Peninsular Malaysia, sensory experiences are encoded by expressives. Expressives display a diagrammatic iconic structure whereby related sensory experiences receive related linguistic forms. Through this type of form-meaning mapping, gradient relationships in the perceptual world receive gradient linguistic representations. Form-meaning mapping such as this enables speakers to categorize sensory events into types and subtypes of perceptions, and to communicate sensory specifics of various kinds. This study illustrates how a diagrammatic iconic structure within sensory vocabulary creates networks of relational sensory knowledge. Through analogy, speakers draw on this knowledge to comprehend sensory referents and create new unconventional forms, which are easily understood by other members of the community. Analogy-making such as this allows speakers to capture fine-grained differences between sensory events, and effectively guide each other through the Semai sensory landscape.
  • Tuinman, A., & Cutler, A. (2011). L1 knowledge and the perception of casual speech processes in L2. In M. Wrembel, M. Kul, & K. Dziubalska-Kolaczyk (Eds.), Achievements and perspectives in SLA of speech: New Sounds 2010. Volume I (pp. 289-301). Frankfurt am Main: Peter Lang.

    Abstract

    Every language manifests casual speech processes, and hence every second language too. This study examined how listeners deal with second-language casual speech processes, as a function of the processes in their native language. We compared a match case, where a second-language process (/t/-reduction) is also operative in native speech, with a mismatch case, where a second-language process (/r/-insertion) is absent from native speech. In each case native and non-native listeners judged stimuli in which a given phoneme (in sentence context) varied along a continuum from absent to present. Second-language listeners in general mimicked native performance in the match case, but deviated significantly from native performance in the mismatch case. Together these results make it clear that the mapping from first to second language is as important in the interpretation of casual speech processes as in other dimensions of speech perception. Unfamiliar casual speech processes are difficult to adapt to in a second language. Casual speech processes that are already familiar from native speech, however, are easy to adapt to; indeed, our results even suggest that subtle differences in their occurrence patterns across the two languages can be detected, and accommodated to, in second-language listening.
  • Tuinman, A., Mitterer, H., & Cutler, A. (2011). Perception of intrusive /r/ in English by native, cross-language and cross-dialect listeners. Journal of the Acoustical Society of America, 130, 1643-1652. doi:10.1121/1.3619793.

    Abstract

    In sequences such as law and order, speakers of British English often insert /r/ between law and and. Acoustic analyses revealed such “intrusive” /r/ to be significantly shorter than canonical /r/. In a 2AFC experiment, native listeners heard British English sentences in which /r/ duration was manipulated across a word boundary [e.g., saw (r)ice], and orthographic and semantic factors were varied. These listeners responded categorically on the basis of acoustic evidence for /r/ alone, reporting ice after short /r/s, rice after long /r/s; orthographic and semantic factors had no effect. Dutch listeners proficient in English who heard the same materials relied less on durational cues than the native listeners, and were affected by both orthography and semantic bias. American English listeners produced intermediate responses to the same materials, being sensitive to duration (less so than native, more so than Dutch listeners), and to orthography (less so than the Dutch), but insensitive to the semantic manipulation. Listeners from language communities without common use of intrusive /r/ may thus interpret intrusive /r/ as canonical /r/, with a language difference increasing this propensity more than a dialect difference. Native listeners, however, efficiently distinguish intrusive from canonical /r/ by exploiting the relevant acoustic variation.
  • De Vaan, L., Ernestus, M., & Schreuder, R. (2011). The lifespan of lexical traces for novel morphologically complex words. The Mental Lexicon, 6, 374-392. doi:10.1075/ml.6.3.02dev.

    Abstract

    This study investigates the lifespans of lexical traces for novel morphologically complex words. In two visual lexical decision experiments, a neologism was either primed by itself or by its stem. The target occurred 40 trials after the prime (Experiments 1 & 2), after a 12 hour delay (Experiment 1), or after a one week delay (Experiment 2). Participants recognized neologisms more quickly if they had seen them before in the experiment. These results show that memory traces for novel morphologically complex words already come into existence after a very first exposure and that they last for at least a week. We did not find evidence for a role of sleep in the formation of memory traces. Interestingly, Base Frequency appeared to play a role in the processing of the neologisms also when they were presented a second time and had their own memory traces.
  • Van Turennout, M., Hagoort, P., & Brown, C. M. (1998). Brain activity during speaking: From syntax to phonology in 40 milliseconds. Science, 280(5363), 572-574. doi:10.1126/science.280.5363.572.

    Abstract

    In normal conversation, speakers translate thoughts into words at high speed. To enable this speed, the retrieval of distinct types of linguistic knowledge has to be orchestrated with millisecond precision. The nature of this orchestration is still largely unknown. This report presents dynamic measures of the real-time activation of two basic types of linguistic knowledge, syntax and phonology. Electrophysiological data demonstrate that during noun-phrase production speakers retrieve the syntactic gender of a noun before its abstract phonological properties. This two-step process operates at high speed: the data show that phonological information is already available 40 milliseconds after syntactic properties have been retrieved.
  • Van Leeuwen, T. M., Den Ouden, H. E. M., & Hagoort, P. (2011). Effective connectivity determines the nature of subjective experience in grapheme-color synesthesia. Journal of Neuroscience, 31, 9879-9884. doi:10.1523/JNEUROSCI.0569-11.2011.

    Abstract

    Synesthesia provides an elegant model to investigate neural mechanisms underlying individual differences in subjective experience in humans. In grapheme–color synesthesia, written letters induce color sensations, accompanied by activation of color area V4. Competing hypotheses suggest that enhanced V4 activity during synesthesia is either induced by direct bottom-up cross-activation from grapheme processing areas within the fusiform gyrus, or indirectly via higher-order parietal areas. Synesthetes differ in the way synesthetic color is perceived: “projector” synesthetes experience color externally colocalized with a presented grapheme, whereas “associators” report an internally evoked association. Using dynamic causal modeling for fMRI, we show that V4 cross-activation during synesthesia was induced via a bottom-up pathway (within fusiform gyrus) in projector synesthetes, but via a top-down pathway (via parietal lobe) in associators. These findings show how altered coupling within the same network of active regions leads to differences in subjective experience. Our findings reconcile the two most influential cross-activation accounts of synesthesia.
  • Van de Geer, J. P., & Levelt, W. J. M. (1963). Detection of visual patterns disturbed by noise: An exploratory study. Quarterly Journal of Experimental Psychology, 15, 192-204. doi:10.1080/17470216308416324.

    Abstract

    An introductory study of the perception of stochastically specified events is reported. The initial problem was to determine whether the perceiver can split visual input data of this kind into random and determined components. The inability of subjects to do so with the stimulus material used (a filmlike sequence of dot patterns), led to the more general question of how subjects code this kind of visual material. To meet the difficulty of defining the subjects' responses, two experiments were designed. In both, patterns were presented as a rapid sequence of dots on a screen. The patterns were more or less disturbed by “noise,” i.e. the dots did not appear exactly at their proper places. In the first experiment the response was a rating on a semantic scale, in the second an identification from among a set of alternative patterns. The results of these experiments give some insight into the coding systems adopted by the subjects. First, noise appears to be detrimental to pattern recognition, especially to patterns with little spread. Second, this shows connections with the factors obtained from analysis of the semantic ratings, e.g. easily disturbed patterns show a large drop in the semantic regularity factor, when only a little noise is added.
  • Van Berkum, J. J. A., Hijne, H., De Jong, T., Van Joolingen, W. R., & Njoo, M. (1991). Aspects of computer simulations in education. Education & Computing, 6(3/4), 231-239.

    Abstract

    Computer simulations in an instructional context can be characterized according to four aspects (themes): simulation models, learning goals, learning processes and learner activity. The present paper provides an outline of these four themes. The main classification criterion for simulation models is quantitative vs. qualitative models. For quantitative models a further subdivision can be made by classifying the independent and dependent variables as continuous or discrete. A second criterion is whether one of the independent variables is time, thus distinguishing dynamic and static models. Qualitative models on the other hand use propositions about non-quantitative properties of a system or they describe quantitative aspects in a qualitative way. Related to the underlying model is the interaction with it. When this interaction has a normative counterpart in the real world we call it a procedure. The second theme of learning with computer simulation concerns learning goals. A learning goal is principally classified along three dimensions, which specify different aspects of the knowledge involved. The first dimension, knowledge category, indicates that a learning goal can address principles, concepts and/or facts (conceptual knowledge) or procedures (performance sequences). The second dimension, knowledge representation, captures the fact that knowledge can be represented in a more declarative (articulate, explicit), or in a more compiled (implicit) format, each one having its own advantages and drawbacks. The third dimension, knowledge scope, involves the learning goal's relation with the simulation domain; knowledge can be specific to a particular domain, or generalizable over classes of domains (generic). A more or less separate type of learning goal refers to knowledge acquisition skills that are pertinent to learning in an exploratory environment. Learning processes constitute the third theme. 
Learning processes are defined as cognitive actions of the learner. Learning processes can be classified using a multilevel scheme. The first (highest) of these levels gives four main categories: orientation, hypothesis generation, testing and evaluation. Examples of more specific processes are model exploration and output interpretation. The fourth theme of learning with computer simulations is learner activity. Learner activity is defined as the ‘physical’ interaction of the learner with the simulations (as opposed to the mental interaction that was described in the learning processes). Five main categories of learner activity are distinguished: defining experimental settings (variables, parameters etc.), interaction process choices (deciding a next step), collecting data, choice of data presentation and metacontrol over the simulation.
  • Van Berkum, J. J. A., & De Jong, T. (1991). Instructional environments for simulations. Education & Computing, 6(3/4), 305-358.

    Abstract

    The use of computer simulations in education and training can have substantial advantages over other approaches. In comparison with alternatives such as textbooks, lectures, and tutorial courseware, a simulation-based approach offers the opportunity to learn in a relatively realistic problem-solving context, to practise task performance without stress, to systematically explore both realistic and hypothetical situations, to change the time-scale of events, and to interact with simplified versions of the process or system being simulated. However, learners are often unable to cope with the freedom offered by, and the complexity of, a simulation. As a result, many of them resort to an unsystematic, unproductive mode of exploration. There is evidence that simulation-based learning can be improved if the learner is supported while working with the simulation. Constructing such an instructional environment around simulations seems to run counter to the freedom the learner is allowed in ‘stand-alone’ simulations. The present article explores instructional measures that allow optimal freedom for the learner. An extensive discussion of learning goals brings two main types of learning goals to the fore: conceptual knowledge and operational knowledge. A third type of learning goal refers to the knowledge acquisition (exploratory learning) process. Cognitive theory has implications for the design of instructional environments around simulations. Most of these implications are quite general, but they can also be related to the three types of learning goals. For conceptual knowledge, the sequence and choice of models and problems is important, as is providing the learner with explanations and minimizing error. For operational knowledge, cognitive theory recommends that learning take place in a problem-solving context, that the learner's behaviour be explicitly traced, that immediate feedback be provided, and that working-memory load be minimized. 
For knowledge acquisition goals, it is recommended that the tutor take the role of a model and coach, and that learning take place together with a companion. A second source of inspiration for designing instructional environments can be found in instructional design theories. Reviewing these shows that interacting with a simulation can be part of a more comprehensive instructional strategy, in which, for example, prerequisite knowledge is also taught. Moreover, information present in a simulation can also be represented in a more structural or static way, and these two forms of presentation can be combined. Learners can be provoked to perform specific learning processes and learner activities by tutor-controlled variations in the simulation, and by tutor-initiated prodding techniques. Finally, instructional design theories show that complex models and procedures can be taught by starting with their central and simple elements and subsequently presenting more complex models and procedures. Most of the recent simulation-based intelligent tutoring systems involve troubleshooting of complex technical systems. Learners are supposed to acquire knowledge of particular system principles, of troubleshooting procedures, or of both. Commonly encountered instructional features include (a) the sequencing of increasingly complex problems to be solved, (b) the availability of a range of help information on request, (c) the presence of an expert troubleshooting module which can step in to provide criticism on learner performance, hints on the problem nature, or suggestions on how to proceed, (d) the option of having the expert module demonstrate optimal performance afterwards, and (e) the use of different ways of depicting the simulated system. A selection of findings is summarized by placing them under the four themes we consider characteristic of learning with computer simulations (see de Jong, this volume).
  • Van de Meerendonk, N., Indefrey, P., Chwilla, D. J., & Kolk, H. H. (2011). Monitoring in language perception: Electrophysiological and hemodynamic responses to spelling violations. Neuroimage, 54, 2350-2363. doi:10.1016/j.neuroimage.2010.10.022.

    Abstract

    The monitoring theory of language perception proposes that competing representations that are caused by strong expectancy violations can trigger a conflict which elicits reprocessing of the input to check for possible processing errors. This monitoring process is thought to be reflected by the P600 component in the EEG. The present study further investigated this monitoring process by comparing syntactic and spelling violations in an EEG and an fMRI experiment. To assess the effect of conflict strength, misspellings were embedded in sentences that were weakly or strongly predictive of a critical word. In support of the monitoring theory, syntactic and spelling violations elicited similarly distributed P600 effects. Furthermore, the P600 effect was larger to misspellings in the strongly compared to the weakly predictive sentences. The fMRI results showed that both syntactic and spelling violations increased activation in the left inferior frontal gyrus (lIFG), while only the misspellings activated additional areas. Conflict strength did not affect the hemodynamic response to spelling violations. These results extend the idea that the lIFG is involved in implementing cognitive control in the presence of representational conflicts in general to the processing of errors in language perception.
  • Van Gijn, R. (2011). Multi-verb constructions in Yurakaré. In A. Y. Aikhenvald, & P. C. Muysken (Eds.), Multi-verb constructions: A view from the Americas (pp. 255-282). Leiden: Brill.
  • Van Geenhoven, V. (1998). On the Argument Structure of some Noun Incorporating Verbs in West Greenlandic. In M. Butt, & W. Geuder (Eds.), The Projection of Arguments - Lexical and Compositional Factors (pp. 225-263). Stanford, CA, USA: CSLI Publications.
  • Van de Ven, M., & Gussenhoven, C. (2011). On the timing of the final rise in Dutch falling-rising intonation contours. Journal of Phonetics, 39, 225-236. doi:10.1016/j.wocn.2011.01.006.

    Abstract

    A corpus of Dutch falling-rising intonation contours with early nuclear accent was elicited from nine speakers with a view to establishing the extent to which the low F0 target immediately preceding the final rise was attracted by a post-nuclear stressed syllable (PNS) in either of the last two words or by Second Occurrence Contrastive Focus (SOCF) on either of these words. We found a small effect of foot type, which we interpret as due to a rhythmic 'trochaic enhancement' effect. The results show that neither PNS nor SOCF influences the location of the low F0 target, which appears consistently to be timed with reference to the utterance end. It is speculated that there are two ways in which postnuclear tones can be timed. The first is by means of a phonological association with a post-nuclear stressed syllable, as in Athenian Greek and Roermond Dutch. The second is by a fixed distance from the utterance end or from the target of an adjacent tone. Accordingly, two phonological mechanisms are defended, association and edge alignment, such that all tones edge-align, but only some associate. Specifically, no evidence was found for a third situation that can be envisaged, in which a post-nuclear tone is gradiently attracted to a post-nuclear stress.

  • Van Gijn, R. (2011). Pronominal affixes, the best of both worlds: The case of Yurakaré. Transactions of the Philological Society, 109(1), 41-58. doi:10.1111/j.1467-968X.2011.01249.x.

    Abstract

    Pronominal affixes in polysynthetic languages have an ambiguous status in the sense that they have characteristics normally associated with free pronouns as well as characteristics associated with agreement markers. This situation arises because pronominal affixes represent intermediate stages in a diachronic development from independent pronouns to agreement markers. Because this diachronic change is not abrupt, pronominal affixes can show different characteristics from language to language. By presenting an in-depth discussion of the pronominal affixes of Yurakaré, an unclassified language from Bolivia, I argue that these so-called intermediate stages as typically attested in polysynthetic languages actually represent economical systems that combine advantages of agreement markers and of free pronouns. In terms of diachronic development, such ‘intermediate’ systems, being functionally well-adapted, appear to be rather stable, and can even be reinforced by subsequent diachronic developments.
  • Van Valin Jr., R. D. (1998). The acquisition of WH-questions and the mechanisms of language acquisition. In M. Tomasello (Ed.), The new psychology of language: Cognitive and functional approaches to language structure (pp. 221-249). Mahwah, New Jersey: Erlbaum.
  • Van de Geer, J. P., Levelt, W. J. M., & Plomp, R. (1962). The connotation of musical consonance. Acta Psychologica, 20, 308-319.

    Abstract

    As a preliminary to further research on musical consonance, an exploratory investigation was made of the different modes of judgment of musical intervals. This was done by way of a semantic differential: subjects rated 23 intervals against 10 scales. In a factor analysis, three factors appeared: pitch, evaluation and fusion. The relation between these factors and some physical characteristics has been investigated. The scale consonant-dissonant proved to be purely evaluative (in opposition to Stumpf's theory). This evaluative connotation is not in accordance with the musicological meaning of consonance. Suggestions are given to account for this difference.
  • Van Gijn, R. (2011). Subjects and objects: A semantic account of Yurakaré argument structure. International Journal of American Linguistics, 77, 595-621. doi:10.1086/662158.

    Abstract

    Yurakaré (unclassified, central Bolivia) marks core arguments on the verb by means of pronominal affixes. Subjects are suffixed, objects are prefixed. There are six types of head-marked objects in Yurakaré, each with its own morphosyntactic and semantic properties. Distributional patterns suggest that the six objects can be divided into two larger groups reminiscent of the typologically recognized direct vs. indirect object distinction. This paper looks at the interaction of this complex system of participant marking and verbal semantics. By investigating the participant-marking patterns of nine verb classes (four representing a gradual decrease of patienthood of the P participant, five a gradual decrease of agentivity of the A participant), I come to the conclusion that grammatical roles in Yurakaré can be defined semantically, and case frames are to a high degree determined by verbal semantics.
  • Van Gijn, R., Haude, K., & Muysken, P. (2011). Subordination in South America: An overview. In R. Van Gijn, K. Haude, & P. Muysken (Eds.), Subordination in native South-American languages (pp. 1-24). Amsterdam: Benjamins.
  • Van Leeuwen, E. J. C., Zimmerman, E., & Davila Ross, M. (2011). Responding to inequities: Gorillas try to maintain their competitive advantage during play fights. Biology Letters, 7(1), 39-42. doi:10.1098/rsbl.2010.0482.

    Abstract

    Humans respond to unfair situations in various ways. Experimental research has revealed that non-human species also respond to unequal situations in the form of inequity aversion when they have the disadvantage. The current study focused on play fights in gorillas to explore for the first time, to our knowledge, if/how non-human species respond to inequities in natural social settings. Hitting causes a naturally occurring inequity among individuals, and here it was specifically assessed how the hitters and their partners engaged in play chases that followed the hitting. The results of this work showed that the hitters significantly more often moved first to run away immediately after the encounter than their partners. These findings provide evidence that non-human species respond to inequities by trying to maintain their competitive advantages. We conclude that non-human primates, like humans, may show different responses to inequities and that they may modify them depending on whether they have the advantage or the disadvantage.
  • Van Gijn, R. (2011). Semantic and grammatical integration in Yurakaré subordination. In R. Van Gijn, K. Haude, & P. Muysken (Eds.), Subordination in native South-American languages (pp. 169-192). Amsterdam: Benjamins.

    Abstract

    Yurakaré (unclassified, central Bolivia) has five subordination strategies (on the basis of a morphosyntactic definition). In this paper I argue that the use of these different strategies is conditioned by the degree of conceptual synthesis of the two events, relating to temporal integration and participant integration. The most integrated events are characterized by shared time reference; morphosyntactically they are serial verb constructions, with syntactically fused predicates. The other constructions are characterized by less grammatical integration, which correlates either with a low degree of temporal integration of the dependent predicate and the main predicate, or with participant discontinuity.
  • Van de Ven, M., Tucker, B. V., & Ernestus, M. (2011). Semantic context effects in the comprehension of reduced pronunciation variants. Memory & Cognition, 39, 1301-1316. doi:10.3758/s13421-011-0103-2.

    Abstract

    Listeners require context to understand the highly reduced words that occur in casual speech. The present study reports four auditory lexical decision experiments in which the role of semantic context in the comprehension of reduced versus unreduced speech was investigated. Experiments 1 and 2 showed semantic priming for combinations of unreduced, but not reduced, primes and low-frequency targets. In Experiment 3, we crossed the reduction of the prime with the reduction of the target. Results showed no semantic priming from reduced primes, regardless of the reduction of the targets. Finally, Experiment 4 showed that reduced and unreduced primes facilitate upcoming low-frequency related words equally if the interstimulus interval is extended. These results suggest that semantically related words need more time to be recognized after reduced primes, but once reduced primes have been fully (semantically) processed, these primes can facilitate the recognition of upcoming words as well as do unreduced primes.
  • Van der Veer, G. C., Bagnara, S., & Kempen, G. (1991). Preface. Acta Psychologica, 78, ix. doi:10.1016/0001-6918(91)90002-H.
  • Vandeberg, L., Guadalupe, T., & Zwaan, R. A. (2011). How verbs can activate things: Cross-language activation across word classes. Acta Psychologica, 138, 68-73. doi:10.1016/j.actpsy.2011.05.007.

    Abstract

    The present study explored whether language-nonselective access in bilinguals occurs across word classes in a sentence context. Dutch–English bilinguals were auditorily presented with English (L2) sentences while looking at a visual world. The sentences contained interlingual homophones from distinct lexical categories (e.g., the English verb spoke, which overlaps phonologically with the Dutch noun for ghost, spook). Eye movement recordings showed that depictions of referents of the Dutch (L1) nouns attracted more visual attention than unrelated distractor pictures in sentences containing homophones. This finding shows that native language objects are activated during second language verb processing despite the structural information provided by the sentence context. Research highlights: We show that native language words are activated during second language sentence processing. We tested this in a visual world setting on homophones with a different word class across languages. Fixations show that processing second language verbs activated native language nouns.
  • Verdonschot, R. G., La Heij, W., Paolieri, D., Zhang, Q., & Schiller, N. O. (2011). Homophonic context effects when naming Japanese kanji: Evidence for processing costs. Quarterly Journal of Experimental Psychology, 64(9), 1836-1849. doi:10.1080/17470218.2011.585241.

    Abstract

    The current study investigated the effects of phonologically related context pictures on the naming latencies of target words in Japanese and Chinese. Reading bare words in alphabetic languages has been shown to be rather immune to effects of context stimuli, even when these stimuli are presented in advance of the target word (e.g., Glaser & Dungelhoff, 1984; Roelofs, 2003). However, recently, semantic context effects of distractor pictures on the naming latencies of Japanese kanji (but not Chinese hanzi) words have been observed (Verdonschot, La Heij, & Schiller, 2010). In the present study, we further investigated this issue using phonologically related (i.e., homophonic) context pictures when naming target words in either Chinese or Japanese. We found that pronouncing bare nouns in Japanese is sensitive to phonologically related context pictures, whereas this is not the case in Chinese. The difference between these two languages is attributed to processing costs caused by multiple pronunciations for Japanese kanji.
  • Verdonschot, R. G., Kiyama, S., Tamaoka, K., Kinoshita, S., La Heij, W., & Schiller, N. O. (2011). The functional unit of Japanese word naming: Evidence from masked priming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37(6), 1458-1473. doi:10.1037/a0024491.

    Abstract

    Theories of language production generally describe the segment as the basic unit in phonological encoding (e.g., Dell, 1988; Levelt, Roelofs, & Meyer, 1999). However, there is also evidence that such a unit might be language specific. Chen, Chen, and Dell (2002), for instance, found no effect of single segments when using a preparation paradigm. To shed more light on the functional unit of phonological encoding in Japanese, a language often described as being mora based, we report the results of 4 experiments using word reading tasks and masked priming. Experiment 1 demonstrated using Japanese kana script that primes, which overlapped in the whole mora with target words, sped up word reading latencies but not when just the onset overlapped. Experiments 2 and 3 investigated a possible role of script by using combinations of romaji (Romanized Japanese) and hiragana; again, facilitation effects were found only when the whole mora and not the onset segment overlapped. Experiment 4 distinguished mora priming from syllable priming and revealed that the mora priming effects obtained in the first 3 experiments are also obtained when a mora is part of a syllable. Again, no priming effect was found for single segments. Our findings suggest that the mora and not the segment (phoneme) is the basic functional phonological unit in Japanese language production planning.
  • Verhagen, J. (2011). Verb placement in second language acquisition: Experimental evidence for the different behavior of auxiliary and lexical verbs. Applied Psycholinguistics, 32, 821-858. doi:10.1017/S0142716411000087.

    Abstract

    This study investigates the acquisition of verb placement by Moroccan and Turkish second language (L2) learners of Dutch. Elicited production data corroborate earlier findings from L2 German that learners who do not produce auxiliaries do not raise lexical verbs over negation, whereas learners who produce auxiliaries do. Data from elicited imitation and sentence matching support this pattern and show that learners can have grammatical knowledge of auxiliary placement before they can produce auxiliaries. With lexical verbs, they do not show such knowledge. These results present further evidence for the different behavior of auxiliary and lexical verbs in early stages of L2 acquisition.
  • Vernes, S. C., Oliver, P. L., Spiteri, E., Lockstone, H. E., Puliyadi, R., Taylor, J. M., Ho, J., Mombereau, C., Brewer, A., Lowy, E., Nicod, J., Groszer, M., Baban, D., Sahgal, N., Cazier, J.-B., Ragoussis, J., Davies, K. E., Geschwind, D. H., & Fisher, S. E. (2011). Foxp2 regulates gene networks implicated in neurite outgrowth in the developing brain. PLoS Genetics, 7(7): e1002145. doi:10.1371/journal.pgen.1002145.

    Abstract

    Forkhead-box protein P2 is a transcription factor that has been associated with intriguing aspects of cognitive function in humans, non-human mammals, and song-learning birds. Heterozygous mutations of the human FOXP2 gene cause a monogenic speech and language disorder. Reduced functional dosage of the mouse version (Foxp2) causes deficient cortico-striatal synaptic plasticity and impairs motor-skill learning. Moreover, the songbird orthologue appears critically important for vocal learning. Across diverse vertebrate species, this well-conserved transcription factor is highly expressed in the developing and adult central nervous system. Very little is known about the mechanisms regulated by Foxp2 during brain development. We used an integrated functional genomics strategy to robustly define Foxp2-dependent pathways, both direct and indirect targets, in the embryonic brain. Specifically, we performed genome-wide in vivo ChIP–chip screens for Foxp2-binding and thereby identified a set of 264 high-confidence neural targets under strict, empirically derived significance thresholds. The findings, coupled to expression profiling and in situ hybridization of brain tissue from wild-type and mutant mouse embryos, strongly highlighted gene networks linked to neurite development. We followed up our genomics data with functional experiments, showing that Foxp2 impacts on neurite outgrowth in primary neurons and in neuronal cell models. Our data indicate that Foxp2 modulates neuronal network formation, by directly and indirectly regulating mRNAs involved in the development and plasticity of neuronal connections.
  • Vernes, S. C., & Fisher, S. E. (2011). Functional genomic dissection of speech and language disorders. In J. D. Clelland (Ed.), Genomics, proteomics, and the nervous system (pp. 253-278). New York: Springer.

    Abstract

    Mutations of the human FOXP2 gene have been shown to cause severe difficulties in learning to make coordinated sequences of articulatory gestures that underlie speech (developmental verbal dyspraxia or DVD). Affected individuals are impaired in multiple aspects of expressive and receptive linguistic processing and display abnormal grey matter volume and functional activation patterns in cortical and subcortical brain regions. The protein encoded by FOXP2 belongs to a divergent subgroup of forkhead-box transcription factors, with a distinctive DNA-binding domain and motifs that mediate hetero- and homodimerization. This chapter describes the successful use of FOXP2 as a unique molecular window into neurogenetic pathways that are important for speech and language development, adopting several complementary strategies. These include direct functional investigations of FOXP2 splice variants and the effects of etiological mutations. FOXP2’s role as a transcription factor also enabled the development of functional genomic routes for dissecting neurogenetic mechanisms that may be relevant for speech and language. By identifying downstream target genes regulated by FOXP2, it was possible to identify common regulatory themes in modulating synaptic plasticity, neurodevelopment, and axon guidance. These targets represent novel entry points into in vivo pathways that may be disturbed in speech and language disorders. The identification of FOXP2 target genes has also led to the discovery of a shared neurogenetic pathway between clinically distinct language disorders: the rare Mendelian form of DVD and a complex and more common form of language disorder known as Specific Language Impairment.

  • Virpioja, S., Lehtonen, M., Hulten, A., Salmelin, R., & Lagus, K. (2011). Predicting reaction times in word recognition by unsupervised learning of morphology. In T. Honkela, W. Duch, M. Girolami, & S. Kaski (Eds.), Artificial Neural Networks and Machine Learning – ICANN 2011 (pp. 275-282). Berlin: Springer.

    Abstract

    A central question in the study of the mental lexicon is how morphologically complex words are processed. We consider this question from the viewpoint of statistical models of morphology. As an indicator of the mental processing cost in the brain, we use reaction times to words in a visual lexical decision task on Finnish nouns. Statistical correlation between a model and reaction times is employed as a goodness measure of the model. In particular, we study Morfessor, an unsupervised method for learning concatenative morphology. The results for a set of inflected and monomorphemic Finnish nouns reveal that the probabilities given by Morfessor, especially the Categories-MAP version, show considerably higher correlations to the reaction times than simple word statistics such as frequency, morphological family size, or length. These correlations are also higher than when any individual test subject is viewed as a model.
  • De Vos, C. (2011). A signers' village in Bali, Indonesia. Minpaku Anthropology Newsletter, 33, 4-5.
  • De Vos, C. (2011). Kata Kolok color terms and the emergence of lexical signs in rural signing communities. The Senses & Society, 6(1), 68-76. doi:10.2752/174589311X12893982233795.

    Abstract

    How do new languages develop systematic ways to talk about sensory experiences, such as color? To what extent is the evolution of color terms guided by societal factors? This paper describes the color lexicon of a rural sign language called Kata Kolok which emerged approximately one century ago in a Balinese village. Kata Kolok has four color signs: black, white, red and a blue-green term. In addition, two non-conventionalized means are used to provide color descriptions: naming relevant objects, and pointing to objects in the vicinity. Comparison with Balinese culture and spoken Balinese brings to light discrepancies between the systems, suggesting that neither cultural practices nor language contact have driven the formation of color signs in Kata Kolok. The few lexicographic investigations from other rural sign languages report limitations in the domain of color. On the other hand, larger, urban signed languages have extensive systems; for example, Australian Sign Language has up to nine color terms (Woodward 1989: 149). These comparisons support the finding that rural sign languages like Kata Kolok fail to provide the societal pressures for the lexicon to expand further.
  • De Vries, M., Christiansen, M. H., & Petersson, K. M. (2011). Learning recursion: Multiple nested and crossed dependencies. Biolinguistics, 5(1/2), 010-035.

    Abstract

    Language acquisition in both natural and artificial language learning settings crucially depends on extracting information from sequence input. A shared sequence learning mechanism is thus assumed to underlie both natural and artificial language learning. A growing body of empirical evidence is consistent with this hypothesis. By means of artificial language learning experiments, we may therefore gain more insight into this shared mechanism. In this paper, we review empirical evidence from artificial language learning and computational modelling studies, as well as natural language data, and suggest that there are two key factors that help determine processing complexity in sequence learning, and thus in natural language processing. We propose that the specific ordering of non-adjacent dependencies (i.e., nested or crossed), as well as the number of non-adjacent dependencies to be resolved simultaneously (i.e., two or three), are important factors in gaining more insight into the boundaries of human sequence learning, and thus also in natural language processing. The implications for theories of linguistic competence are discussed.
  • Vuong, L., & Martin, R. C. (2011). LIFG-based attentional control and the resolution of lexical ambiguities in sentence context. Brain and Language, 116, 22-32. doi:10.1016/j.bandl.2010.09.012.

    Abstract

    The role of attentional control in lexical ambiguity resolution was examined in two patients with damage to the left inferior frontal gyrus (LIFG) and one control patient with non-LIFG damage. Experiment 1 confirmed that the LIFG patients had attentional control deficits compared to normal controls while the non-LIFG patient was relatively unimpaired. Experiment 2 showed that all three patients did as well as normal controls in using biasing sentence context to resolve lexical ambiguities involving balanced ambiguous words, but only the LIFG patients took an abnormally long time on lexical ambiguities that resolved toward a subordinate meaning of biased ambiguous words. Taken together, the results suggest that attentional control plays an important role in the resolution of certain lexical ambiguities – those that induce strong interference from context-inappropriate meanings (i.e., dominant meanings of biased ambiguous words).
  • Wang, L., Bastiaansen, M. C. M., Yang, Y., & Hagoort, P. (2011). The influence of information structure on the depth of semantic processing: How focus and pitch accent determine the size of the N400 effect. Neuropsychologia, 49, 813-820. doi:10.1016/j.neuropsychologia.2010.12.035.

    Abstract

    To highlight relevant information in dialogues, both wh-question context and pitch accent in answers can be used, such that focused information gains more attention and is processed more elaborately. To evaluate the relative influence of context and pitch accent on the depth of semantic processing, we measured Event-Related Potentials (ERPs) to auditorily presented wh-question-answer pairs. A semantically incongruent word in the answer occurred either in focus or non-focus position as determined by the context, and this word was either accented or unaccented. Semantic incongruency elicited different N400 effects in different conditions. The largest N400 effect was found when the question-marked focus was accented, while the other three conditions elicited smaller N400 effects. The results suggest that context and accentuation interact. Thus accented focused words were processed more deeply compared to conditions where focus and accentuation mismatched, or when the new information had no marking. In addition, there seem to be sex differences in the depth of semantic processing. For males, a significant N400 effect was observed only when the question-marked focus was accented; reduced N400 effects were found for the other dialogues. In contrast, females produced similar N400 effects in all the conditions. These results suggest that regardless of external cues, females tend to engage in more elaborate semantic processing compared to males.
  • Weber, A., Broersma, M., & Aoyagi, M. (2011). Spoken-word recognition in foreign-accented speech by L2 listeners. Journal of Phonetics, 39, 479-491. doi:10.1016/j.wocn.2010.12.004.

    Abstract

    Two cross-modal priming studies investigated the recognition of English words spoken with a foreign accent. Auditory English primes were either typical of a Dutch accent or typical of a Japanese accent in English and were presented to both Dutch and Japanese L2 listeners. Lexical-decision times to subsequent visual target words revealed that foreign-accented words can facilitate word recognition for L2 listeners if at least one of two requirements is met: the foreign-accented production is in accordance with the language background of the L2 listener, or the foreign accent is perceptually confusable with the standard pronunciation for the L2 listener. If neither one of the requirements is met, no facilitatory effect of foreign accents on L2 word recognition is found. Taken together, these findings suggest that linguistic experience with a foreign accent affects the ability to recognize words carrying this accent, and there is furthermore a general benefit for L2 listeners for recognizing foreign-accented words that are perceptually confusable with the standard pronunciation.
  • Wegener, C. (2011). Expression of reciprocity in Savosavo. In N. Evans, A. Gaby, S. C. Levinson, & A. Majid (Eds.), Reciprocals and semantic typology (pp. 213-224). Amsterdam: Benjamins.

    Abstract

    This paper describes how reciprocity is expressed in the Papuan (i.e. non-Austronesian) language Savosavo, spoken in the Solomon Islands. The main strategy is to use the reciprocal nominal mapamapa, which can occur in different NP positions and always triggers default third person singular masculine agreement, regardless of the number and gender of the referents. After a description of this as well as another strategy that is occasionally used (the ‘joint activity construction’), the paper provides a detailed analysis of data elicited with a set of video stimuli and shows that the main strategy is used to describe even clearly asymmetric situations, as long as more than one person acts on more than one person in a joint activity.
  • Whitehouse, A. J., Bishop, D. V., Ang, Q., Pennell, C. E., & Fisher, S. E. (2011). CNTNAP2 variants affect early language development in the general population. Genes, Brain and Behavior, 10, 451-456. doi:10.1111/j.1601-183X.2011.00684.x.

    Abstract

    Early language development is known to be under genetic influence, but the genes affecting normal variation in the general population remain largely elusive. Recent studies of disorders reported that variants of the CNTNAP2 gene are associated both with language deficits in specific language impairment (SLI) and with language delays in autism. We tested the hypothesis that these CNTNAP2 variants affect communicative behavior, measured at 2 years of age in a large epidemiological sample, the Western Australian Pregnancy Cohort (Raine) Study. Single-point analyses of 1149 children (606 males, 543 females) revealed patterns of association which were strikingly reminiscent of those observed in previous investigations of impaired language, centered on the same genetic markers, and with a consistent direction of effect (rs2710102, p = .0239; rs759178, p = .0248). Based on these findings we performed analyses of four-marker haplotypes of rs2710102-rs759178-rs17236239-rs2538976, and identified significant association (haplotype TTAA, p = .049; haplotype GCAG, p = .0014). Our study suggests that common variants in the exon 13-15 region of CNTNAP2 influence early language acquisition, as assessed at age 2, in the general population. We propose that these CNTNAP2 variants increase susceptibility to SLI or autism when they occur together with other risk factors.

  • Wilkin, K., & Holler, J. (2011). Speakers’ use of ‘action’ and ‘entity’ gestures with definite and indefinite references. In G. Stam, & M. Ishino (Eds.), Integrating gestures: The interdisciplinary nature of gesture (pp. 293-308). Amsterdam: John Benjamins.

    Abstract

    Common ground is an essential prerequisite for coordination in social interaction, including language use. When referring back to a referent in discourse, this referent is ‘given information’ and therefore in the interactants’ common ground. When a referent is being referred to for the first time, a speaker introduces ‘new information’. The analyses reported here are on gestures that accompany such references when they include definite and indefinite grammatical determiners. The main finding from these analyses is that referents referred to by definite and indefinite articles were equally often accompanied by gesture, but speakers tended to accompany definite references with gestures focusing on action information and indefinite references with gestures focusing on entity information. The findings suggest that speakers use speech and gesture together to design utterances appropriate for speakers with whom they share common ground.

  • Willems, R. M., Labruna, L., D'Esposito, M., Ivry, R., & Casasanto, D. (2011). A functional role for the motor system in language understanding: Evidence from Theta-Burst Transcranial Magnetic Stimulation. Psychological Science, 22, 849 -854. doi:10.1177/0956797611412387.

    Abstract

    Does language comprehension depend, in part, on neural systems for action? In previous studies, motor areas of the brain were activated when people read or listened to action verbs, but it remains unclear whether such activation is functionally relevant for comprehension. In the experiments reported here, we used off-line theta-burst transcranial magnetic stimulation to investigate whether a causal relationship exists between activity in premotor cortex and action-language understanding. Right-handed participants completed a lexical decision task, in which they read verbs describing manual actions typically performed with the dominant hand (e.g., “to throw,” “to write”) and verbs describing nonmanual actions (e.g., “to earn,” “to wander”). Responses to manual-action verbs (but not to nonmanual-action verbs) were faster after stimulation of the hand area in left premotor cortex than after stimulation of the hand area in right premotor cortex. These results suggest that premotor cortex has a functional role in action-language understanding.

  • Willems, R. M., Clevis, K., & Hagoort, P. (2011). Add a picture for suspense: Neural correlates of the interaction between language and visual information in the perception of fear. Social Cognitive and Affective Neuroscience, 6, 404-416. doi:10.1093/scan/nsq050.

    Abstract

    We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which states that presentation of a visual scene that is neutral in itself intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants’ brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone, as well as to reading non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function for emotional information across domains such as visual and linguistic information.
  • Willems, R. M., Benn, Y., Hagoort, P., Tonia, I., & Varley, R. (2011). Communicating without a functioning language system: Implications for the role of language in mentalizing. Neuropsychologia, 49, 3130-3135. doi:10.1016/j.neuropsychologia.2011.07.023.

    Abstract

    A debated issue in the relationship between language and thought is how our linguistic abilities are involved in understanding the intentions of others (‘mentalizing’). The results of both theoretical and empirical work have been used to argue that linguistic, and more specifically, grammatical, abilities are crucial in representing the mental states of others. Here we contribute to this debate by investigating how damage to the language system influences the generation and understanding of intentional communicative behaviors. Four patients with pervasive language difficulties (severe global or agrammatic aphasia) engaged in an experimentally controlled non-verbal communication paradigm, which required signaling and understanding a communicative message. Despite their profound language problems they were able to engage in recipient design as well as intention recognition, showing similar indicators of mentalizing as have been observed in the neurologically healthy population. Our results show that aspects of the ability to communicate remain present even when core capacities of the language system are dysfunctional.
  • Willems, R. M., & Casasanto, D. (2011). Flexibility in embodied language understanding. Frontiers in Psychology, 2, 116. doi:10.3389/fpsyg.2011.00116.

    Abstract

    Do people use sensori-motor cortices to understand language? Here we review neurocognitive studies of language comprehension in healthy adults and evaluate their possible contributions to theories of language in the brain. We start by sketching the minimal predictions that an embodied theory of language understanding makes for empirical research, and then survey studies that have been offered as evidence for embodied semantic representations. We explore four debated issues: first, does activation of sensori-motor cortices during action language understanding imply that action semantics relies on mirror neurons? Second, what is the evidence that activity in sensori-motor cortices plays a functional role in understanding language? Third, to what extent do responses in perceptual and motor areas depend on the linguistic and extra-linguistic context? And finally, can embodied theories accommodate language about abstract concepts? Based on the available evidence, we conclude that sensori-motor cortices are activated during a variety of language comprehension tasks, for both concrete and abstract language. Yet, this activity depends on the context in which perception and action words are encountered. Although modality-specific cortical activity is not a sine qua non of language processing even for language about perception and action, sensori-motor regions of the brain appear to make functional contributions to the construction of meaning, and should therefore be incorporated into models of the neurocognitive architecture of language.
  • Willems, R. M. (2011). Re-appreciating the why of cognition: 35 years after Marr and Poggio. Frontiers in Psychology, 2, 244. doi:10.3389/fpsyg.2011.00244.

    Abstract

    Marr and Poggio’s levels of description are among the best-known theoretical constructs of twentieth-century cognitive science. The framework entails that behavior can and should be considered at three different levels: computation, algorithm, and implementation. In this contribution the focus is on the computational level of description, the level that describes the “why” of cognition. I argue that the computational level should be taken as a starting point in devising experiments in cognitive (neuro)science. Instead, the starting point in empirical practice is often a focus on the stimulus or on some capacity of the cognitive system. The “why” of cognition tends to be ignored when designing research, and is not considered in subsequent inference from experimental results. The overall aim of this manuscript is to show how re-appreciation of the computational level of description as a starting point for experiments can lead to more informative experimentation.
  • Zeshan, U., & Panda, S. (2011). Reciprocal constructions in Indo-Pakistani Sign Language. In N. Evans, A. Gaby, S. C. Levinson, & A. Majid (Eds.), Reciprocals and semantic typology (pp. 91-113). Amsterdam: Benjamins.

    Abstract

    Indo-Pakistani Sign Language (IPSL) is the sign language used by deaf communities in a large region across India and Pakistan. This visual-gestural language has a dedicated construction for specifically expressing reciprocal relationships, which can be applied to agreement verbs and to auxiliaries. The reciprocal construction relies on a change in the movement pattern of the signs it applies to. In addition, IPSL has a number of other strategies which can have a reciprocal interpretation, and the IPSL lexicon includes a good number of inherently reciprocal signs. All reciprocal expressions can be modified in complex ways that rely on the grammatical use of the sign space. Considering grammaticalisation and lexicalisation processes linking some of these constructions is also important for a better understanding of reciprocity in IPSL.
  • Zwitserlood, I. (2011). Gebruiksgemak van het eerste Nederlandse Gebarentaal woordenboek kan beter [Book review]. Levende Talen Magazine, 4, 46-47.

    Abstract

    Review: User friendliness of the first dictionary of Sign Language of the Netherlands can be improved
  • Zwitserlood, I. (2011). Gevraagd: medewerkers verzorgingshuis met een goede oog-handcoördinatie. Het meten van NGT-vaardigheid. Levende Talen Magazine, 1, 44-46.

    Abstract

    (Needed: staff for residential care home with good eye-hand coordination. Measuring NGT-skills.)
  • Zwitserlood, I. (2011). Het Corpus NGT en de dagelijkse lespraktijk. Levende Talen Magazine, 6, 46.

    Abstract

    (The Corpus NGT and the daily practice of language teaching)
  • Zwitserlood, I. (2011). Het Corpus NGT en de opleiding leraar/tolk NGT. Levende Talen Magazine, 1, 40-41.

    Abstract

    (The Corpus NGT and teacher NGT/interpreter NGT training)