  • Mak, M., & Willems, R. M. (2021). Mental simulation during literary reading. In D. Kuiken, & A. M. Jacobs (Eds.), Handbook of empirical literary studies (pp. 63-84). Berlin: De Gruyter.

    Abstract

    Readers experience a number of sensations during reading. They do not – or do not only – process words and sentences in a detached, abstract manner. Instead they “perceive” what they read about. They see descriptions of scenery, feel what characters feel, and hear the sounds in a story. These sensations tend to be grouped under the umbrella terms “mental simulation” and “mental imagery.” This chapter provides an overview of empirical research on the role of mental simulation during literary reading. Our chapter also discusses what mental simulation is and how it relates to mental imagery. Moreover, it explores how mental simulation plays a role in leading models of literary reading and investigates under what circumstances mental simulation occurs during literature reading. Finally, the effect of mental simulation on the literary reader’s experience is discussed, and suggestions and unresolved issues in this field are formulated.
  • Collins, J. (2015). ‘Give’ and semantic maps. In B. Nolan, G. Rawoens, & E. Diedrichsen (Eds.), Causation, permission, and transfer: Argument realisation in GET, TAKE, PUT, GIVE and LET verbs (pp. 129-146). Amsterdam: John Benjamins.
  • Hanique, I., Aalders, E., & Ernestus, M. (2015). How robust are exemplar effects in word comprehension? In G. Jarema, & G. Libben (Eds.), Phonological and phonetic considerations of lexical processing (pp. 15-39). Amsterdam: John Benjamins.

    Abstract

    This paper studies the robustness of exemplar effects in word comprehension by means of four long-term priming experiments with lexical decision tasks in Dutch. A prime and target represented the same word type and were presented with the same or different degree of reduction. In Experiment 1, participants heard only a small number of trials, a large proportion of repeated words, and stimuli produced by only one speaker. They recognized targets more quickly if these represented the same degree of reduction as their primes, which forms additional evidence for the exemplar effects reported in the literature. Similar effects were found for two speakers who differ in their pronunciations. In Experiment 2, with a smaller proportion of repeated words and more trials between prime and target, participants recognized targets preceded by primes with the same or a different degree of reduction equally quickly. Also, in Experiments 3 and 4, in which listeners were not exposed to one but two types of pronunciation variation (reduction degree and speaker voice), no exemplar effects arose. We conclude that the role of exemplars in speech comprehension during natural conversations, which typically involve several speakers and few repeated content words, may be smaller than previously assumed.
  • Hintz, F., & Huettig, F. (2015). The complexity of the visual environment modulates language-mediated eye gaze. In R. Mishra, N. Srinivasan, & F. Huettig (Eds.), Attention and Vision in Language Processing (pp. 39-55). Berlin: Springer. doi:10.1007/978-81-322-2443-3_3.

    Abstract

    Three eye-tracking experiments investigated the impact of the complexity of the visual environment on the likelihood of word-object mapping taking place at phonological, semantic and visual levels of representation during language-mediated visual search. Dutch participants heard spoken target words while looking at four objects embedded in displays of different complexity and indicated the presence or absence of the target object. During filler trials the target objects were present, but during experimental trials they were absent and the display contained various competitor objects. For example, given the target word “beaker”, the display contained a phonological (a beaver, bever), a shape (a bobbin, klos), a semantic (a fork, vork) competitor, and an unrelated distractor (an umbrella, paraplu). When objects were presented in simple four-object displays (Experiment 2), there were clear attentional biases to all three types of competitors replicating earlier research (Huettig and McQueen, 2007). When the objects were embedded in complex scenes including four human-like characters or four meaningless visual shapes (Experiments 1, 3), there were biases in looks to visual and semantic but not to phonological competitors. In both experiments, however, we observed evidence for inhibition in looks to phonological competitors, which suggests that the phonological forms of the objects nevertheless had been retrieved. These findings suggest that phonological word-object mapping is contingent upon the nature of the visual environment and add to a growing body of evidence that the nature of our visual surroundings induces particular modes of processing during language-mediated visual search.
  • Kruspe, N., Burenhult, N., & Wnuk, E. (2015). Northern Aslian. In P. Sidwell, & M. Jenny (Eds.), Handbook of Austroasiatic Languages (pp. 419-474). Leiden: Brill.
  • Schubotz, L., Oostdijk, N., & Ernestus, M. (2015). Y’know vs. you know: What phonetic reduction can tell us about pragmatic function. In S. Lestrade, P. De Swart, & L. Hogeweg (Eds.), Addenda: Artikelen voor Ad Foolen (pp. 361-380). Nijmegen: Radboud University.
  • Hammond, J. (2014). Switch-reference antecedence and subordination in Whitesands (Oceanic). In R. van Gijn, J. Hammond, D. Matić, S. van Putten, & A. V. Galucio (Eds.), Information structure and reference tracking in complex sentences (pp. 263-290). Amsterdam: John Benjamins.

    Abstract

    Whitesands is an Oceanic language of the southern Vanuatu subgroup. Like the related languages of southern Vanuatu, Whitesands has developed a clause-linkage system which monitors referent continuity on new clauses – typically contrasting with the previous clause. In this chapter I address how the construction interacts with topic continuity in discourse. I outline the morphosyntactic form of this anaphoric co-reference device. From a functionalist perspective, I show how the system is used in natural discourse and discuss its restrictions with respect to relative and complement clauses. I conclude with a discussion on its interactions with theoretical notions of information structure – in particular the nature of presupposed versus asserted clauses, information back- and foregrounding, and how these affect the use of the switch-reference system.
  • Muysken, P., Hammarström, H., Birchall, J., Danielsen, S., Eriksen, L., Galucio, A. V., Van Gijn, R., Van de Kerke, S., Kolipakam, V., Krasnoukhova, O., Müller, N., & O'Connor, L. (2014). The languages of South America: Deep families, areal relationships, and language contact. In P. Muysken, & L. O'Connor (Eds.), Language contact in South America (pp. 299-323). Cambridge: Cambridge University Press.
  • O'Connor, L., & Kolipakam, V. (2014). Human migrations, dispersals, and contacts in South America. In L. O'Connor, & P. Muysken (Eds.), The native languages of South America: Origins, development, typology (pp. 29-55). Cambridge: Cambridge University Press.
  • Rossi, G. (2014). When do people not use language to make requests? In P. Drew, & E. Couper-Kuhlen (Eds.), Requesting in social interaction (pp. 301-332). Amsterdam: John Benjamins.

    Abstract

    In everyday joint activities (e.g. playing cards, preparing potatoes, collecting empty plates), participants often request others to pass, move or otherwise deploy objects. In order to get these objects to or from the requestee, requesters need to manipulate them, for example by holding them out, reaching for them, or placing them somewhere. As they perform these manual actions, requesters may or may not accompany them with language (e.g. Take this potato and cut it or Pass me your plate). This study shows that adding or omitting language in the design of a request is influenced in the first place by a criterion of recognition. When the requested action is projectable from the advancement of an activity, presenting a relevant object to the requestee is enough for them to understand what to do; when, on the other hand, the requested action is occasioned by a contingent development of the activity, requesters use language to specify what the requestee should do. This criterion operates alongside a perceptual criterion, to do with the affordances of the visual and auditory modality. When the requested action is projectable but the requestee is not visually attending to the requester’s manual behaviour, the requester can use just enough language to attract the requestee’s attention and secure immediate recipiency. This study contributes to a line of research concerned with the organisation of verbal and nonverbal resources for requesting. Focussing on situations in which language is not – or only minimally – used, it demonstrates the role played by visible bodily behaviour and by the structure of everyday activities in the formation and understanding of requests.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Modelling language – vision interactions in the hub and spoke framework. In J. Mayor, & P. Gomez (Eds.), Computational Models of Cognitive Processes: Proceedings of the 13th Neural Computation and Psychology Workshop (NCPW13) (pp. 3-16). Singapore: World Scientific Publishing.

    Abstract

    Multimodal integration is a central characteristic of human cognition. However, our understanding of the interaction between modalities and its influence on behaviour is still in its infancy. This paper examines the value of the Hub & Spoke framework (Plaut, 2002; Rogers et al., 2004; Dilkina et al., 2008; 2010) as a tool for exploring multimodal interaction in cognition. We present a Hub and Spoke model of language–vision information interaction and report the model’s ability to replicate a range of phonological, visual and semantic similarity word-level effects reported in the Visual World Paradigm (Cooper, 1974; Tanenhaus et al., 1995). The model provides an explicit connection between the percepts of language and the distribution of eye gaze and demonstrates the scope of the Hub-and-Spoke architectural framework by modelling new aspects of multimodal cognition.
  • Van Putten, S. (2014). Left-dislocation and subordination in Avatime (Kwa). In R. Van Gijn, J. Hammond, D. Matic, S. van Putten, & A. V. Galucio (Eds.), Information Structure and Reference Tracking in Complex Sentences (pp. 71-98). Amsterdam: John Benjamins.

    Abstract

    Left dislocation is characterized by a sentence-initial element which is cross-referenced in the remainder of the sentence, and often set off by an intonation break. Because of these properties, left dislocation has been analyzed as an extraclausal phenomenon. Whether or not left dislocation can occur within subordinate clauses has been a matter of debate in the literature, but has never been checked against corpus data. This paper presents data from Avatime, a Kwa (Niger-Congo) language spoken in Ghana, showing that left dislocation occurs within subordinate clauses in spontaneous discourse. This poses a problem for the extraclausal analysis of left dislocation. I show that this problem can best be solved by assuming that Avatime allows the embedding of units larger than a clause.
  • Verkerk, A. (2014). Where Alice fell into: Motion events from a parallel corpus. In B. Szmrecsanyi, & B. Wälchli (Eds.), Aggregating dialectology, typology, and register analysis: Linguistic variation in text and speech (pp. 324-354). Berlin: De Gruyter.
  • Enfield, N. J., Dingemanse, M., Baranova, J., Blythe, J., Brown, P., Dirksmeyer, T., Drew, P., Floyd, S., Gipper, S., Gisladottir, R. S., Hoymann, G., Kendrick, K. H., Levinson, S. C., Magyari, L., Manrique, E., Rossi, G., San Roque, L., & Torreira, F. (2013). Huh? What? – A first survey in 21 languages. In M. Hayashi, G. Raymond, & J. Sidnell (Eds.), Conversational repair and human understanding (pp. 343-380). New York: Cambridge University Press.

    Abstract

    Introduction

    A comparison of conversation in twenty-one languages from around the world reveals commonalities and differences in the way that people do open-class other-initiation of repair (Schegloff, Jefferson, and Sacks, 1977; Drew, 1997). We find that speakers of all of the spoken languages in the sample make use of a primary interjection strategy (in English it is Huh?), where the phonetic form of the interjection is strikingly similar across the languages: a monosyllable featuring an open non-back vowel [a, æ, ə, ʌ], often nasalized, usually with rising intonation and sometimes an [h-] onset. We also find that most of the languages have another strategy for open-class other-initiation of repair, namely the use of a question word (usually “what”). Here we find significantly more variation across the languages. The phonetic form of the question word involved is completely different from language to language: e.g., English [wɑt] versus Cha'palaa [ti] versus Duna [aki]. Furthermore, the grammatical structure in which the repair-initiating question word can or must be expressed varies within and across languages. In this chapter we present data on these two strategies – primary interjections like Huh? and question words like What? – with discussion of possible reasons for the similarities and differences across the languages. We explore some implications for the notion of repair as a system, in the context of research on the typology of language use.

    The general outline of this chapter is as follows. We first discuss repair as a system across languages and then introduce the focus of the chapter: open-class other-initiation of repair. A discussion of the main findings follows, where we identify two alternative strategies in the data: an interjection strategy (Huh?) and a question word strategy (What?). Formal features and possible motivations are discussed for the interjection strategy and the question word strategy in order. A final section discusses bodily behavior including posture, eyebrow movements and eye gaze, both in spoken languages and in a sign language.
  • Schepens, J., Van der Slik, F., & Van Hout, R. (2013). The effect of linguistic distance across Indo-European mother tongues on learning Dutch as a second language. In L. Borin, & A. Saxena (Eds.), Approaches to measuring linguistic differences (pp. 199-230). Berlin: Mouton de Gruyter.
  • Sumer, B., Zwitserlood, I., Perniss, P. M., & Ozyurek, A. (2013). Acquisition of locative expressions in children learning Turkish Sign Language (TİD) and Turkish. In E. Arik (Ed.), Current directions in Turkish Sign Language research (pp. 243-272). Newcastle upon Tyne: Cambridge Scholars Publishing.

    Abstract

    In sign languages, where space is often used to talk about space, expressions of spatial relations (e.g., ON, IN, UNDER, BEHIND) may rely on analogue mappings of real space onto signing space. In contrast, spoken languages express space in mostly categorical ways (e.g. adpositions). This raises interesting questions about the role of language modality in the acquisition of expressions of spatial relations. However, whether and to what extent modality influences the acquisition of spatial language is controversial – mostly due to the lack of direct comparisons of Deaf children to Deaf adults and to age-matched hearing children in similar tasks. Furthermore, the previous studies have taken English as the only model for spoken language development of spatial relations.
    Therefore, we present a balanced study in which spatial expressions by deaf and hearing children in two different age-matched groups (preschool children and school-age children) are systematically compared, as well as compared to the spatial expressions of adults. All participants performed the same tasks, describing angular (LEFT, RIGHT, FRONT, BEHIND) and non-angular spatial configurations (IN, ON, UNDER) of different objects (e.g. apple in box; car behind box).
    The analysis of the descriptions with non-angular spatial relations does not show an effect of modality on the development of locative expressions in TİD and Turkish. However, preliminary results of the analysis of expressions of angular spatial relations suggest that signers provide angular information in their spatial descriptions more frequently than Turkish speakers in all three age groups, thus showing a potentially different developmental pattern in this domain. Implications of the findings with regard to the development of relations in spatial language and cognition will be discussed.
  • Verkerk, A. (2009). A semantic map of secondary predication. In B. Botma, & J. Van Kampen (Eds.), Linguistics in the Netherlands 2009 (pp. 115-126).
