Publications

  • Enfield, N. J., & Sidnell, J. (2012). Collateral effects, agency, and systems of language use [Reply to commentators]. Current Anthropology, 53(3), 327-329.
  • Enfield, N. J. (2012). [Review of the book Language, culture, and mind: Natural constructions and social kinds by Paul Kockelman]. Language in Society, 41(5), 674-677. doi:10.1017/S004740451200070X.
  • Enfield, N. J. (2005). Depictive and other secondary predication in Lao. In N. P. Himmelmann, & E. Schultze-Berndt (Eds.), Secondary predication and adverbial modification (pp. 379-392). Oxford: Oxford University Press.
  • Enfield, N. J. (2005). Areal linguistics and mainland Southeast Asia. Annual Review of Anthropology, 34, 181-206. doi:10.1146/annurev.anthro.34.081804.120406.
  • Enfield, N. J. (2005). [Comment on the book Explorations in the deictic field]. Current Anthropology, 46(2), 212.
  • Enfield, N. J. (2005). [Review of the book Laughter in interaction by Philip Glenn]. Linguistics, 43(6), 1195-1197. doi:10.1515/ling.2005.43.6.1191.
  • Enfield, N. J. (2005). Micro and macro dimensions in linguistic systems. In S. Marmaridou, K. Nikiforidou, & E. Antonopoulou (Eds.), Reviewing linguistic thought: Converging trends for the 21st Century (pp. 313-326). Berlin: Mouton de Gruyter.
  • Enfield, N. J. (2014). Human agency and the infrastructure for requests. In P. Drew, & E. Couper-Kuhlen (Eds.), Requesting in social interaction (pp. 35-50). Amsterdam: John Benjamins.

    Abstract

    This chapter discusses some of the elements of human sociality that serve as the social and cognitive infrastructure or preconditions for the use of requests and other kinds of recruitments in interaction. The notion of an agent with goals is a canonical starting point, though importantly agency tends not to be wholly located in individuals, but rather is socially distributed. This is well illustrated in the case of requests, in which the person or group that has a certain goal is not necessarily the one who carries out the behavior towards that goal. The chapter focuses on the role of semiotic (mostly linguistic) resources in negotiating the distribution of agency with request-like actions, with examples from video-recorded interaction in Lao, a language spoken in Laos and nearby countries. The examples illustrate five hallmarks of requesting in human interaction, which show some ways in which our ‘manipulation’ of other people is quite unlike our manipulation of tools: (1) that even though B is being manipulated, B wants to help; (2) that while A is manipulating B now, A may be manipulated in return later; (3) that the goal of the behavior may be shared between A and B; (4) that B may not comply, or may comply differently than requested, due to actual or potential contingencies; and (5) that A and B are accountable to one another: reasons may be asked for, and/or given, for the request. These hallmarks of requesting are grounded in a prosocial framework of human agency.
  • Enfield, N. J. (2012). Language innateness [Letter to the Editor]. The Times Literary Supplement, October 26, 2012(5717), 6.
  • Enfield, N. J., & Sidnell, J. (2014). Language presupposes an enchronic infrastructure for social interaction. In D. Dor, C. Knight, & J. Lewis (Eds.), The social origins of language (pp. 92-104). Oxford: Oxford University Press.
  • Enfield, N. J., Kockelman, P., & Sidnell, J. (2014). Interdisciplinary perspectives. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 599-602). Cambridge: Cambridge University Press.
  • Enfield, N. J., Kockelman, P., & Sidnell, J. (2014). Introduction: Directions in the anthropology of language. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 1-24). Cambridge: Cambridge University Press.
  • Enfield, N. J. (2014). Natural causes of language: Frames, biases and cultural transmission. Berlin: Language Science Press. Retrieved from http://langsci-press.org/catalog/book/48.

    Abstract

    What causes a language to be the way it is? Some features are universal, some are inherited, others are borrowed, and yet others are internally innovated. But no matter where a bit of language is from, it will only exist if it has been diffused and kept in circulation through social interaction in the history of a community. This book makes the case that a proper understanding of the ontology of language systems has to be grounded in the causal mechanisms by which linguistic items are socially transmitted, in communicative contexts. A biased transmission model provides a basis for understanding why certain things and not others are likely to develop, spread, and stick in languages. Because bits of language are always parts of systems, we also need to show how it is that items of knowledge and behavior become structured wholes. The book argues that to achieve this, we need to see how causal processes apply in multiple frames or 'time scales' simultaneously, and we need to understand and address each and all of these frames in our work on language. This forces us to confront implications that are not always comfortable: for example, that "a language" is not a real thing but a convenient fiction, that language-internal and language-external processes have a lot in common, and that tree diagrams are poor conceptual tools for understanding the history of languages. By exploring avenues for clear solutions to these problems, this book suggests a conceptual framework for ultimately explaining, in causal terms, what languages are like and why they are like that.
  • Enfield, N. J., Kockelman, P., & Sidnell, J. (Eds.). (2014). The Cambridge handbook of linguistic anthropology. Cambridge: Cambridge University Press.
  • Enfield, N. J., Sidnell, J., & Kockelman, P. (2014). System and function. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 25-28). Cambridge: Cambridge University Press.
  • Enfield, N. J. (2005). [Review of the book The Handbook of Historical Linguistics edited by Brian D. Joseph and Richard D. Janda]. Linguistics, 43(6), 1191-1197. doi:10.1515/ling.2005.43.6.1191.
  • Enfield, N. J. (2012). The slow explosion of speech [Review of the book The origins of Grammar by James R. Hurford]. The Times Literary Supplement, March 30, 2012(5687), 11-12. Retrieved from http://www.the-tls.co.uk/tls/public/article1004404.ece.

    Abstract

    Book review of James R. Hurford, The Origins of Grammar. 791 pp. Oxford University Press. ISBN 978 0 19 920787 9.
  • Enfield, N. J. (2014). The item/system problem. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 48-77). Cambridge: Cambridge University Press.
  • Enfield, N. J. (2014). Transmission biases in the cultural evolution of language: Towards an explanatory framework. In D. Dor, C. Knight, & J. Lewis (Eds.), The social origins of language (pp. 325-335). Oxford: Oxford University Press.
  • Erb, J., Henry, M. J., Eisner, F., & Obleser, J. (2012). Auditory skills and brain morphology predict individual differences in adaptation to degraded speech. Neuropsychologia, 50, 2154-2164. doi:10.1016/j.neuropsychologia.2012.05.013.

    Abstract

    Noise-vocoded speech is a spectrally highly degraded signal, but it preserves the temporal envelope of speech. Listeners vary considerably in their ability to adapt to this degraded speech signal. Here, we hypothesized that individual differences in adaptation to vocoded speech should be predictable by non-speech auditory, cognitive, and neuroanatomical factors. We tested eighteen normal-hearing participants in a short-term vocoded speech-learning paradigm (listening to 100 4-band-vocoded sentences). Non-speech auditory skills were assessed using amplitude modulation (AM) rate discrimination, where modulation rates were centered on the speech-relevant rate of 4 Hz. Working memory capacities were evaluated (digit span and nonword repetition), and structural MRI scans were examined for anatomical predictors of vocoded speech learning using voxel-based morphometry. Listeners who learned faster to understand degraded speech also showed smaller thresholds in the AM discrimination task. This ability to adjust to degraded speech is furthermore reflected anatomically in increased volume in an area of the left thalamus (pulvinar) that is strongly connected to the auditory and prefrontal cortices. Thus, individual non-speech auditory skills and left thalamus grey matter volume can predict how quickly a listener adapts to degraded speech.
  • Ernestus, M., Mak, W. M., & Baayen, R. H. (2005). Waar 't kofschip strandt. Levende Talen Magazine, 92, 9-11.
  • Ernestus, M. (2014). Acoustic reduction and the roles of abstractions and exemplars in speech processing. Lingua, 142, 27-41. doi:10.1016/j.lingua.2012.12.006.

    Abstract

    Acoustic reduction refers to the frequent phenomenon in conversational speech that words are produced with fewer or lenited segments compared to their citation forms. The few published studies on the production and comprehension of acoustic reduction have important implications for the debate on the relevance of abstractions and exemplars in speech processing. This article discusses these implications. It first briefly introduces the key assumptions of simple abstractionist and simple exemplar-based models. It then discusses the literature on acoustic reduction and draws the conclusion that both types of models need to be extended to explain all findings. The ultimate model should allow for the storage of different pronunciation variants, but also reserve an important role for phonetic implementation. Furthermore, the recognition of a highly reduced pronunciation variant requires top-down information and leads to activation of the corresponding unreduced variant, the variant that reaches listeners’ consciousness. These findings are best accounted for in hybrid models, assuming both abstract representations and exemplars. None of the hybrid models formulated so far can account for all data on reduced speech, and we need further research to obtain detailed insight into how speakers produce and listeners comprehend reduced speech.
  • Ernestus, M., & Giezenaar, G. (2014). Een goed verstaander heeft maar een half woord nodig. In B. Bossers (Ed.), Vakwerk 9: Achtergronden van de NT2-lespraktijk: Lezingen conferentie Hoeven 2014 (pp. 81-92). Amsterdam: BV NT2.
  • Ernestus, M., & Mak, W. M. (2005). Analogical effects in reading Dutch verb forms. Memory & Cognition, 33(7), 1160-1173.

    Abstract

    Previous research has shown that the production of morphologically complex words in isolation is affected by the properties of morphologically, phonologically, or semantically similar words stored in the mental lexicon. We report five experiments with Dutch speakers that show that reading an inflectional word form in its linguistic context is also affected by analogical sets of formally similar words. Using the self-paced reading technique, we show in Experiments 1-3 that an incorrectly spelled suffix delays readers less if the incorrect spelling is in line with the spelling of verbal suffixes in other inflectional forms of the same verb. In Experiments 4 and 5, our use of the self-paced reading technique shows that formally similar words with different stems affect the reading of incorrect suffixal allomorphs on a given stem. These intra- and interparadigmatic effects in reading may be due to online processes or to the storage of incorrect forms resulting from analogical effects in production.
  • Ernestus, M. (2012). Segmental within-speaker variation. In A. C. Cohn, C. Fougeron, & M. K. Huffman (Eds.), The Oxford handbook of laboratory phonology (pp. 93-102). New York: Oxford University Press.
  • Ernestus, M., Kočková-Amortová, L., & Pollak, P. (2014). The Nijmegen corpus of casual Czech. In N. Calzolari, K. Choukri, T. Declerck, H. Loftsson, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of LREC 2014: 9th International Conference on Language Resources and Evaluation (pp. 365-370).

    Abstract

    This article introduces a new speech corpus, the Nijmegen Corpus of Casual Czech (NCCCz), which contains more than 30 hours of high-quality recordings of casual conversations in Common Czech, among ten groups of three male and ten groups of three female friends. All speakers were native speakers of Czech, raised in Prague or in the region of Central Bohemia, and were between 19 and 26 years old. Every group of speakers consisted of one confederate, who was instructed to keep the conversations lively, and two speakers naive to the purposes of the recordings. The naive speakers were engaged in conversations for approximately 90 minutes, while the confederate joined them for approximately the last 72 minutes. The corpus was orthographically annotated by experienced transcribers and this orthographic transcription was aligned with the speech signal. In addition, the conversations were videotaped. This corpus can form the basis for all types of research on casual conversations in Czech, including phonetic research and research on how to improve automatic speech recognition. The corpus will be freely available.
  • Escudero, P., Simon, E., & Mitterer, H. (2012). The perception of English front vowels by North Holland and Flemish listeners: Acoustic similarity predicts and explains cross-linguistic and L2 perception. Journal of Phonetics, 40, 280-288. doi:10.1016/j.wocn.2011.11.004.

    Abstract

    We investigated whether regional differences in the native language (L1) influence the perception of second language (L2) sounds. Many cross-language and L2 perception studies have assumed that the degree of acoustic similarity between L1 and L2 sounds predicts cross-linguistic and L2 performance. The present study tests this assumption by examining the perception of the English contrast between /ɛ/ and /æ/ in native speakers of Dutch spoken in North Holland (the Netherlands) and in East- and West-Flanders (Belgium). A Linear Discriminant Analysis on acoustic data from both dialects showed that their differences in vowel production, as reported in Adank, van Hout, and Van de Velde (2007), should influence the perception of the L2 vowels if listeners focus on the vowels' acoustic/auditory properties. Indeed, the results of categorization tasks with Dutch or English vowels as response options showed that the two listener groups differed as predicted by the discriminant analysis. Moreover, the results of the English categorization task revealed that both groups of Dutch listeners displayed the asymmetric pattern found in previous word recognition studies, i.e. English /æ/ was more frequently confused with English /ɛ/ than the reverse. This suggests a strong link between previous L2 word learning results and the present L2 perceptual assimilation patterns.
  • Estruch, S. B., Buzon, V., Carbo, L. R., Schorova, L., Luders, J., & Estebanez-Perpina, E. (2012). The oncoprotein BCL11A binds to Orphan Nuclear Receptor TLX and potentiates its transrepressive function. PLoS One, 7(6): e37963. doi:10.1371/journal.pone.0037963.

    Abstract

    Nuclear orphan receptor TLX (NR2E1) functions primarily as a transcriptional repressor and its pivotal role in brain development, glioblastoma, mental retardation and retinopathologies make it an attractive drug target. TLX is expressed in the neural stem cells (NSCs) of the subventricular zone and the hippocampus subgranular zone, regions with persistent neurogenesis in the adult brain, and functions as an essential regulator of NSCs maintenance and self-renewal. Little is known about the TLX social network of interactors and only few TLX coregulators are described. To identify and characterize novel TLX-binders and possible coregulators, we performed yeast-two-hybrid (Y2H) screens of a human adult brain cDNA library using different TLX constructs as baits. Our screens identified multiple clones of Atrophin-1 (ATN1), a previously described TLX interactor. In addition, we identified an interaction with the oncoprotein and zinc finger transcription factor BCL11A (CTIP1/Evi9), a key player in the hematopoietic system and in major blood-related malignancies. This interaction was validated by expression and coimmunoprecipitation in human cells. BCL11A potentiated the transrepressive function of TLX in an in vitro reporter gene assay. Our work suggests that BCL11A is a novel TLX coregulator that might be involved in TLX-dependent gene regulation in the brain.
  • Evans, S., McGettigan, C., Agnew, Z., Rosen, S., Cesar, L., Boebinger, D., Ostarek, M., Chen, S. H., Richards, A., Meekins, S., & Scott, S. K. (2014). The neural basis of informational and energetic masking effects in the perception and production of speech [abstract]. The Journal of the Acoustical Society of America, 136(4), 2243. doi:10.1121/1.4900096.

    Abstract

    When we have spoken conversations, it is usually in the context of competing sounds within our environment. Speech can be masked by many different kinds of sounds, for example, machinery noise and the speech of others, and these different sounds place differing demands on cognitive resources. In this talk, I will present data from a series of functional magnetic resonance imaging (fMRI) studies in which the informational properties of background sounds have been manipulated to make them more or less similar to speech. I will demonstrate the neural effects associated with speaking over and listening to these sounds, and demonstrate how in perception these effects are modulated by the age of the listener. The results will be interpreted within a framework of auditory processing developed from primate neurophysiology and human functional imaging work (Rauschecker and Scott 2009).
  • Fahrenfort, J. J., Snijders, T. M., Heinen, K., van Gaal, S., & Scholte, H. S. (2012). Neuronal integration in visual cortex elevates face category tuning to conscious face perception. Proceedings of the National Academy of Sciences of the United States of America, 109(52), 21504-21509. doi:10.1073/pnas.1207414110.
  • Fawcett, C., & Liszkowski, U. (2012). Infants anticipate others’ social preferences. Infant and Child Development, 21, 239-249. doi:10.1002/icd.739.

    Abstract

    In the current eye-tracking study, we explored whether 12-month-old infants can predict others' social preferences. We showed infants scenes in which two characters alternately helped or hindered an agent in his goal of climbing a hill. In a control condition, the two characters moved up and down the hill in identical ways to the helper and hinderer but did not make contact with the agent; thus, they did not cause him to reach or not reach her or his goal. Following six alternating familiarization trials of helping and hindering interactions (help-hinder condition) or up and down interactions (up-down condition), infants were shown one test trial in which they could visually anticipate the agent approaching one of the two characters. As predicted, infants in the help-hinder condition made significantly more visual anticipations toward the helping than hindering character, suggesting that they predicted the agent to approach the helping character. In contrast, infants revealed no difference in visual anticipations between the up and down characters. The up-down condition served to control for low-level perceptual explanations of the results for the help-hinder condition. Thus, together the results reveal that 12-month-old infants make predictions about others' behaviour and social preferences from a third-party perspective.
  • Fawcett, C., & Liszkowski, U. (2012). Mimicry and play initiation in 18-month-old infants. Infant Behavior and Development, 35, 689-696. doi:10.1016/j.infbeh.2012.07.014.

    Abstract

    Across two experiments, we examined the relationship between 18-month-old infants’ mimicry and social behavior – particularly invitations to play with an adult play partner. In Experiment 1, we manipulated whether an adult mimicked the infant's play or not during an initial play phase. We found that infants who had been mimicked were subsequently more likely to invite the adult to join their play with a new toy. In addition, they reenacted marginally more steps from a social learning demonstration she gave. In Experiment 2, infants had the chance to spontaneously mimic the adult during the play phase. Complementing Experiment 1, those infants who spent more time mimicking the adult were more likely to invite her to play with a new toy. This effect was specific to play and not apparent in other communicative acts, such as directing the adult's attention to an event or requesting toys. Together, the results suggest that infants use mimicry as a tool to establish social connections with others and that mimicry has specific influences on social behaviors related to initiating subsequent joint interactions.
  • Fawcett, C., & Liszkowski, U. (2012). Observation and initiation of joint action in infants. Child Development, 83, 434-441. doi:10.1111/j.1467-8624.2011.01717.x.

    Abstract

    Infants imitate others’ individual actions, but do they also replicate others’ joint activities? To examine whether observing joint action influences infants’ initiation of joint action, forty-eight 18-month-old infants observed object demonstrations by 2 models acting together (joint action), 2 models acting individually (individual action), or 1 model acting alone (solitary action). Infants’ behavior was examined after they were given each object. Infants in the joint action condition attempted to initiate joint action more often than infants in the other conditions, yet they were equally likely to communicate for other reasons and to imitate the demonstrated object-directed actions. The findings suggest that infants learn to replicate others’ joint activity through observation, an important skill for cultural transmission of shared practices.
  • Fear, B. D., Cutler, A., & Butterfield, S. (1995). The strong/weak syllable distinction in English. Journal of the Acoustical Society of America, 97, 1893-1904. doi:10.1121/1.412063.

    Abstract

    Strong and weak syllables in English can be distinguished on the basis of vowel quality, of stress, or of both factors. Critical for deciding between these factors are syllables containing unstressed unreduced vowels, such as the first syllable of automata. In this study 12 speakers produced sentences containing matched sets of words with initial vowels ranging from stressed to reduced, at normal and at fast speech rates. Measurements of the duration, intensity, F0, and spectral characteristics of the word-initial vowels showed that unstressed unreduced vowels differed significantly from both stressed and reduced vowels. This result held true across speaker sex and dialect. The vowels produced by one speaker were then cross-spliced across the words within each set, and the resulting words' acceptability was rated by listeners. In general, cross-spliced words were only rated significantly less acceptable than unspliced words when reduced vowels interchanged with any other vowel. Correlations between rated acceptability and acoustic characteristics of the cross-spliced words demonstrated that listeners were attending to duration, intensity, and spectral characteristics. Together these results suggest that unstressed unreduced vowels in English pattern differently from both stressed and reduced vowels, so that no acoustic support for a binary categorical distinction exists; nevertheless, listeners make such a distinction, grouping unstressed unreduced vowels by preference with stressed vowels
  • Fedden, S., & Boroditsky, L. (2012). Spatialization of time in Mian. Frontiers in Psychology, 3, 485. doi:10.3389/fpsyg.2012.00485.

    Abstract

    We examine representations of time among the Mianmin of Papua New Guinea. We begin by describing the patterns of spatial and temporal reference in Mian. Mian uses a system of spatial terms that derive from the orientation and direction of the Hak and Sek rivers and the surrounding landscape. We then report results from a temporal arrangement task administered to a group of Mian speakers. The results reveal evidence for a variety of temporal representations. Some participants arranged time with respect to their bodies (left to right or toward the body). Others arranged time as laid out on the landscape, roughly along the east/west axis (either east to west or west to east). This absolute pattern is consistent both with the axis of the motion of the sun and the orientation of the two rivers, which provides the basis for spatial reference in the Mian language. The results also suggest an increase in left-to-right temporal representations with increasing years of formal education (and the reverse pattern for absolute spatial representations for time). These results extend previous work on spatial representations for time to a new geographical region, physical environment, and linguistic and cultural system.
  • Ferreri, A., Ponzoni, M., Govi, S., Pasini, E., Mappa, S., Vino, A., Facchetti, F., Vezzoli, P., Doglioni, C., Berti, E., & Dolcetti, R. (2012). Prevalence of chlamydial infection in a series of 108 primary cutaneous lymphomas. British Journal of Dermatology, 166(5), 1121-1123. doi:10.1111/j.1365-2133.2011.10704.x.
  • Fessler, D. M., Stieger, S., Asaridou, S. S., Bahia, U., Cravalho, M., de Barros, P., Delgado, T., Fisher, M. L., Frederick, D., Perez, P. G., Goetz, C., Haley, K., Jackson, J., Kushnick, G., Lew, K., Pain, E., Florindo, P. P., Pisor, A., Sinaga, E., Sinaga, L., Smolich, L., Sun, D. M., & Voracek, M. (2012). Testing a postulated case of intersexual selection in humans: The role of foot size in judgments of physical attractiveness and age. Evolution and Human Behavior, 33, 147-164. doi:10.1016/j.evolhumbehav.2011.08.002.

    Abstract

    The constituents of attractiveness differ across the sexes. Many relevant traits are dimorphic, suggesting that they are the product of intersexual selection. However, direction of causality is generally difficult to determine, as aesthetic criteria can as readily result from, as cause, dimorphism. Women have proportionately smaller feet than men. Prior work on the role of foot size in attractiveness suggests an asymmetry across the sexes, as small feet enhance female appearance, yet average, rather than large, feet are preferred on men. Previous investigations employed crude stimuli and limited samples. Here, we report on multiple cross-cultural studies designed to overcome these limitations. With the exception of one rural society, we find that small foot size is preferred when judging women, yet no equivalent preference applies to men. Similarly, consonant with the thesis that a preference for youth underlies intersexual selection acting on women, we document an inverse relationship between foot size and perceived age. Examination of preferences regarding, and inferences from, feet viewed in isolation suggests different roles for proportionality and absolute size in judgments of female and male bodies. Although the majority of these results bolster the conclusion that pedal dimorphism is the product of intersexual selection, the picture is complicated by the reversal of the usual preference for small female feet found in one rural society. While possibly explicable in terms of greater emphasis on female economic productivity relative to beauty, the latter finding underscores the importance of employing diverse samples when exploring postulated evolved aesthetic preferences.

  • Filippi, P., Charlton, B. D., & Fitch, W. T. (2012). Do Women Prefer More Complex Music around Ovulation? PLoS One, 7(4): e35626. doi:10.1371/journal.pone.0035626.

    Abstract

    The evolutionary origins of music are much debated. One theory holds that the ability to produce complex musical sounds might reflect qualities that are relevant in mate choice contexts and hence, that music is functionally analogous to the sexually-selected acoustic displays of some animals. If so, women may be expected to show heightened preferences for more complex music when they are most fertile. Here, we used computer-generated musical pieces and ovulation predictor kits to test this hypothesis. Our results indicate that women prefer more complex music in general; however, we found no evidence that their preference for more complex music increased around ovulation. Consequently, our findings are not consistent with the hypothesis that a heightened preference/bias in women for more complex music around ovulation could have played a role in the evolution of music. We go on to suggest future studies that could further investigate whether sexual selection played a role in the evolution of this universal aspect of human culture.
  • Filippi, P. (2005). Gilbert Ryle: Pensare la Mente. Master Thesis, Università degli Studi di Palermo, Palermo.

    Abstract

    This study focuses on the main work of Gilbert Ryle, "The concept of Mind" (1949). Here the author demolishes what he refers to as the Cartesian dogma of "the ghost in the machine", highlighting the absurdity of categorical ordering in dualist systems, where mental activities are explained as separate from physical actions. Surprisingly, the Italian translator of "The concept of Mind", Ferruccio Rossi-Landi, missed this key aspect of Ryle’s work, producing a significantly misleading translation. This is already clear from the title: "Lo spirito come comportamento" [The ghost as behavior]. This erroneous translation led "The concept of Mind" to be interpreted in Italy as a mere study of behavioral reductionism. Here, I argue in favor of the originality of Ryle’s approach in pointing out socio-cultural dynamics as the non-physical dimensions of the human mind, which are nonetheless linked to the human brain. In doing so, I trace the crucial influence of Wittgenstein’s philosophy on Ryle’s interpretation of the concept of mind, which helps in gaining a better understanding of his work. Wittgenstein’s influence shows clearly in Ryle’s conceptual operation of grounding the acquisition of dispositions and competences, which ultimately define rational subjects as rational agents, in the shared background of social and cultural dynamics. In a nutshell, this social dimension is the defining characteristic of the human mind and of all human actions in Ryle’s philosophy. As Ryle argues in "On thinking" (1979), this intrinsic quality of human actions can reveal itself in actions that one performs absent-mindedly in everyday life, as well as in more complex ones: for instance, when the mind reflects upon itself.
  • Filippi, P. (2014). Linguistic animals: understanding language through a comparative approach. In E. A. Cartmill, S. Roberts, H. Lyn, & H. Cornish (Eds.), The Evolution of Language: Proceedings of the 10th International Conference (pp. 74-81). doi:10.1142/9789814603638_0082.

    Abstract

    With the aim of clarifying the definition of humans as “linguistic animals”, in the present paper I functionally distinguish three types of language competences: i) language as a general biological tool for communication, ii) “perceptual syntax”, iii) propositional language. Following this terminological distinction, I review pivotal findings on animals' communication systems, which constitute useful evidence for the investigation of the nature of three core components of humans' faculty of language: semantics, syntax, and theory of mind. In fact, although the capacity to process and share utterances with an open-ended structure is uniquely human, some isolated components of our linguistic competence are shared with nonhuman animals. Therefore, as I argue in the present paper, the investigation of animals' communicative competence provides crucial insights into the range of cognitive constraints underlying humans' capacity for language, enabling at the same time the analysis of its phylogenetic path as well as of the selective pressures that have led to its emergence.
  • Filippi, P., Gingras, B., & Fitch, W. T. (2014). The effect of pitch enhancement on spoken language acquisition. In E. A. Cartmill, S. Roberts, H. Lyn, & H. Cornish (Eds.), The Evolution of Language: Proceedings of the 10th International Conference (pp. 437-438). doi:10.1142/9789814603638_0082.

    Abstract

    The aim of this study is to investigate the word-learning phenomenon utilizing a new model that integrates three processes: a) extracting a word out of a continuous sound sequence, b) inducing referential meanings, c) mapping a word onto its intended referent, with the possibility of extending the acquired word to potentially infinite sets of objects of the same semantic category, and to not-previously-heard utterances. Previous work has examined the role of statistical learning and/or of prosody in each of these processes separately. In order to examine the multilayered word-learning task, we integrate these two strands of investigation into a single approach. We conducted the study on adults and included six different experimental conditions, each including specific perceptual manipulations of the signal. In condition 1, the only cue to word-meaning mapping was the co-occurrence between words and referents (“statistical cue”). This cue was present in all the conditions. In condition 2, we added infant-directed-speech (IDS) typical pitch enhancement as a marker of the target word and of the statistical cue. In condition 3 we placed IDS typical pitch enhancement on random words of the utterances, i.e. inconsistently matching the statistical cue. In conditions 4, 5 and 6 we manipulated respectively duration, a non-prosodic acoustic cue and a visual cue as markers of the target word and of the statistical cue. Systematic comparisons of learning performance in condition 1 with the other conditions revealed that the word-learning process is facilitated only when pitch prominence consistently marks the target word and the statistical cue…
  • Filippi, P. (2012). Sintassi, Prosodia e Socialità: le Origini del Linguaggio Verbale. PhD Thesis, Università degli Studi di Palermo, Palermo.

    Abstract

    What is the key cognitive ability that makes humans unique among all the other animals? Our work aims to contribute to this research question by adopting a comparative and philosophical approach to the origins of verbal language. In particular, we adopt three strands of analysis that are relevant in the context of comparative investigation on the origins of verbal language: a) research on the evolutionary ‘homologies’, which provides information on the phylogenetic traits that humans and other primates share with their common ancestor; b) investigations on “analogous” traits, aimed at finding the evolutionary pressures that guided the emergence of the same biological traits that evolved independently in phylogenetically distant species; c) the ontogenetic development of the ability to produce and understand verbal language in human infants. Within this comparative approach, we focus on three key aspects that we addressed by bridging recent empirical evidence on language processing with philosophical investigations on verbal language: (i) pattern processing as a biological precursor of syntax and algebraic rule acquisition, (ii) sound modulation as a guide to pattern comprehension in speech, animal vocalization and music, (iii) social strategies for mutual understanding, survival and group cohesion. We conclude by emphasizing the interplay between these three sets of cognitive processes as a fundamental dimension grounding the emergence of the human ability for propositional language.
  • Filippi, P., Gingras, B., & Fitch, W. T. (2014). Pitch enhancement facilitates word learning across visual contexts. Frontiers in Psychology, 5: 1468. doi:10.3389/fpsyg.2014.01468.

    Abstract

    This study investigates word-learning using a new experimental paradigm that integrates three processes: (a) extracting a word out of a continuous sound sequence, (b) inferring its referential meanings in context, (c) mapping the segmented word onto its broader intended referent, such as other objects of the same semantic category, and to novel utterances. Previous work has examined the role of statistical learning and/or of prosody in each of these processes separately. Here, we combine these strands of investigation into a single experimental approach, in which participants viewed a photograph belonging to one of three semantic categories while hearing a complex, five-word utterance containing a target word. Six between-subjects conditions were tested with 20 adult participants each. In condition 1, the only cue to word-meaning mapping was the co-occurrence of word and referents. This statistical cue was present in all conditions. In condition 2, the target word was sounded at a higher pitch. In condition 3, random words were sounded at a higher pitch, creating an inconsistent cue. In condition 4, the duration of the target word was lengthened. In conditions 5 and 6, an extraneous acoustic cue and a visual cue were associated with the target word, respectively. Performance in this word-learning task was significantly higher than that observed with simple co-occurrence only when pitch prominence consistently marked the target word. We discuss implications for the pragmatic value of pitch marking as well as the relevance of our findings to language acquisition and language evolution.
  • Fisher, S. E., Hatchwell, E., Chand, A., Ockenden, N., Monaco, A. P., & Craig, I. W. (1995). Construction of two YAC contigs in human Xp11.23-p11.22, one encompassing the loci OATL1, GATA, TFE3, and SYP, the other linking DXS255 to DXS146. Genomics, 29(2), 496-502. doi:10.1006/geno.1995.9976.

    Abstract

    We have constructed two YAC contigs in the Xp11.23-p11.22 interval of the human X chromosome, a region that was previously poorly characterized. One contig, of at least 1.4 Mb, links the pseudogene OATL1 to the genes GATA1, TFE3, and SYP and also contains loci implicated in Wiskott-Aldrich syndrome and synovial sarcoma. A second contig, mapping proximal to the first, is estimated to be over 2.1 Mb and links the hypervariable locus DXS255 to DXS146, and also contains a chloride channel gene that is responsible for hereditary nephrolithiasis. We have used plasmid rescue, inverse PCR, and Alu-PCR to generate 20 novel markers from this region, 1 of which is polymorphic, and have positioned these relative to one another on the basis of YAC analysis. The order of previously known markers within our contigs, Xpter-OATL1-GATA-TFE3-SYP-DXS255-DXS146-Xcen, agrees with genomic pulsed-field maps of the region. In addition, we have constructed a rare-cutter restriction map for a 710-kb region of the DXS255-DXS146 contig and have identified three CpG islands. These contigs and new markers will provide a useful resource for more detailed analysis of Xp11.23-p11.22, a region implicated in several genetic diseases.
  • Fisher, S. E. (2005). Dissection of molecular mechanisms underlying speech and language disorders. Applied Psycholinguistics, 26, 111-128. doi:10.1017/S0142716405050095.

    Abstract

    Developmental disorders affecting speech and language are highly heritable, but very little is currently understood about the neuromolecular mechanisms that underlie these traits. Integration of data from diverse research areas, including linguistics, neuropsychology, neuroimaging, genetics, molecular neuroscience, developmental biology, and evolutionary anthropology, is becoming essential for unraveling the relevant pathways. Recent studies of the FOXP2 gene provide a case in point. Mutation of FOXP2 causes a rare form of speech and language disorder, and the gene appears to be a crucial regulator of embryonic development for several tissues. Molecular investigations of the central nervous system indicate that the gene may be involved in establishing and maintaining connectivity of corticostriatal and olivocerebellar circuits in mammals. Notably, it has been shown that FOXP2 was subject to positive selection in recent human evolution. Consideration of findings from multiple levels of analysis demonstrates that FOXP2 cannot be characterized as “the gene for speech,” but rather as one critical piece of a complex puzzle. This story gives a flavor of what is to come in this field and indicates that anyone expecting simple explanations of etiology or evolution should be prepared for some intriguing surprises.
  • Fisher, S. E., Van Bakel, I., Lloyd, S. E., Pearce, S. H. S., Thakker, R. V., & Craig, I. W. (1995). Cloning and characterization of CLCN5, the human kidney chloride channel gene implicated in Dent disease (an X-linked hereditary nephrolithiasis). Genomics, 29, 598-606. doi:10.1006/geno.1995.9960.

    Abstract

    Dent disease, an X-linked familial renal tubular disorder, is a form of Fanconi syndrome associated with proteinuria, hypercalciuria, nephrocalcinosis, kidney stones, and eventual renal failure. We have previously used positional cloning to identify the 3' part of a novel kidney-specific gene (initially termed hClC-K2, but now referred to as CLCN5), which is deleted in patients from one pedigree segregating Dent disease. Mutations that disrupt this gene have been identified in other patients with this disorder. Here we describe the isolation and characterization of the complete open reading frame of the human CLCN5 gene, which is predicted to encode a protein of 746 amino acids, with significant homology to all known members of the ClC family of voltage-gated chloride channels. CLCN5 belongs to a distinct branch of this family, which also includes the recently identified genes CLCN3 and CLCN4. We have shown that the coding region of CLCN5 is organized into 12 exons, spanning 25-30 kb of genomic DNA, and have determined the sequence of each exon-intron boundary. The elucidation of the coding sequence and exon-intron organization of CLCN5 will both expedite the evaluation of structure/function relationships of these ion channels and facilitate the screening of other patients with renal tubular dysfunction for mutations at this locus.
  • Fisher, S. E. (2005). On genes, speech, and language. The New England Journal of Medicine, 353, 1655-1657. doi:10.1056/NEJMp058207.

    Abstract

    Learning to talk is one of the most important milestones in human development, but we still have only a limited understanding of the way in which the process occurs. It normally takes just a few years to go from babbling newborn to fluent communicator. During this period, the child learns to produce a rich array of speech sounds through intricate control of articulatory muscles, assembles a vocabulary comprising thousands of words, and deduces the complicated structural rules that permit construction of meaningful sentences. All of this (and more) is achieved with little conscious effort.

  • Fitch, W. T., Friederici, A. D., & Hagoort, P. (Eds.). (2012). Pattern perception and computational complexity [Special Issue]. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367 (1598).
  • Fitch, W. T., Friederici, A. D., & Hagoort, P. (2012). Pattern perception and computational complexity: Introduction to the special issue. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367 (1598), 1925-1932. doi:10.1098/rstb.2012.0099.

    Abstract

    Research on pattern perception and rule learning, grounded in formal language theory (FLT) and using artificial grammar learning paradigms, has exploded in the last decade. This approach marries empirical research conducted by neuroscientists, psychologists and ethologists with the theory of computation and FLT, developed by mathematicians, linguists and computer scientists over the last century. Of particular current interest are comparative extensions of this work to non-human animals, and neuroscientific investigations using brain imaging techniques. We provide a short introduction to the history of these fields, and to some of the dominant hypotheses, to help contextualize these ongoing research programmes, and finally briefly introduce the papers in the current issue.
  • Fitz, H. (2014). Computermodelle für Spracherwerb und Sprachproduktion. Forschungsbericht 2014 - Max-Planck-Institut für Psycholinguistik. In Max-Planck-Gesellschaft Jahrbuch 2014. München: Max Planck Society for the Advancement of Science. Retrieved from http://www.mpg.de/7850678/Psycholinguistik_JB_2014?c=8236817.

    Abstract

    Relative clauses are a syntactic device to create complex sentences and they make language structurally productive. Despite a considerable number of experimental studies, it is still largely unclear how children learn relative clauses and how these are processed in the language system. Researchers at the MPI for Psycholinguistics used a computational learning model to gain novel insights into these issues. The model explains the differential development of relative clauses in English as well as cross-linguistic differences.
  • FitzPatrick, I., & Indefrey, P. (2014). Head start for target language in bilingual listening. Brain Research, 1542, 111-130. doi:10.1016/j.brainres.2013.10.014.

    Abstract

    In this study we investigated the availability of non-target language semantic features in bilingual speech processing. We recorded EEG from Dutch-English bilinguals who listened to spoken sentences in their L2 (English) or L1 (Dutch). In Experiments 1 and 3 the sentences contained an interlingual homophone. The sentence context was either biased towards the target language meaning of the homophone (target biased), the non-target language meaning (non-target biased), or neither meaning of the homophone (fully incongruent). These conditions were each compared to a semantically congruent control condition. In L2 sentences we observed an N400 in the non-target biased condition that had an earlier offset than the N400 to fully incongruent homophones. In the target biased condition, a negativity emerged that was later than the N400 to fully incongruent homophones. In L1 contexts, neither target biased nor non-target biased homophones yielded significant N400 effects (compared to the control condition). In Experiments 2 and 4 the sentences contained a language switch to a non-target language word that could be semantically congruent or incongruent. Semantically incongruent words (switched, and non-switched) elicited an N400 effect. The N400 to semantically congruent language-switched words had an earlier offset than the N400 to incongruent words. Both congruent and incongruent language switches elicited a Late Positive Component (LPC). These findings show that bilinguals activate both meanings of interlingual homophones irrespective of their contextual fit. In L2 contexts, the target-language meaning of the homophone has a head start over the non-target language meaning. The target-language head start is also evident for language switches from both L2-to-L1 and L1-to-L2
  • Flecken, M., von Stutterheim, C., & Carroll, M. (2014). Grammatical aspect influences motion event perception: Evidence from a cross-linguistic non-verbal recognition task. Language and Cognition, 6(1), 45-78. doi:10.1017/langcog.2013.2.

    Abstract

    Using eye-tracking as a window on cognitive processing, this study investigates language effects on attention to motion events in a non-verbal task. We compare gaze allocation patterns by native speakers of German and Modern Standard Arabic (MSA), two languages that differ with regard to the grammaticalization of temporal concepts. Findings of the non-verbal task, in which speakers watch dynamic event scenes while performing an auditory distracter task, are compared to gaze allocation patterns which were obtained in an event description task, using the same stimuli. We investigate whether differences in the grammatical aspectual systems of German and MSA affect the extent to which endpoints of motion events are linguistically encoded and visually processed in the two tasks. In the linguistic task, we find clear language differences in endpoint encoding and in the eye-tracking data (attention to event endpoints) as well: German speakers attend to and linguistically encode endpoints more frequently than speakers of MSA. The fixation data in the non-verbal task show similar language effects, providing relevant insights with regard to the language-and-thought debate. The present study is one of the few studies that focus explicitly on language effects related to grammatical concepts, as opposed to lexical concepts.
  • Floyd, S. (2012). Book review of [Poéticas de vida en espacios de muerte: Género, poder y estado en la cotidianeidad warao [Poetics of life in spaces of death: Gender, power and the state in Warao everyday life] Charles L. Briggs. Quito, Ecuador: Abya Yala, 2008. 460 pp.]. American Anthropologist, 114, 543-544. doi:10.1111/j.1548-1433.2012.01461_1.x.

  • Floyd, S. (2014). ‘We’ as social categorization in Cha’palaa: A language of Ecuador. In T.-S. Pavlidou (Ed.), Constructing collectivity: 'We' across languages and contexts (pp. 135-158). Amsterdam: Benjamins.

    Abstract

    This chapter connects the grammar of the first person collective pronoun in the Cha’palaa language of Ecuador with its use in interaction for collective reference and social category membership attribution, addressing the problem posed by the fact that non-singular pronouns do not have distributional semantics (“speakers”) but are rather associational (“speaker and relevant associates”). It advocates a cross-disciplinary approach that jointly considers elements of linguistic form, situated usages of those forms in instances of interaction, and the broader ethnographic context of those instances. Focusing on large-scale and relatively stable categories such as racial and ethnic groups, it argues that looking at how speakers categorize themselves and others in the speech situation by using pronouns provides empirical data on the status of macro-social categories for members of a society.

  • Floyd, S. (2014). [Review of the book Flexible word classes: Typological studies of underspecified parts of speech ed. by Jan Rijkhoff and Eva van Lier]. Linguistics, 52, 1499-1502. doi:10.1515/ling-2014-0027.
  • Floyd, S. (2014). Four types of reduplication in the Cha'palaa language of Ecuador. In H. van der Voort, & G. Goodwin Gómez (Eds.), Reduplication in Indigenous Languages of South America (pp. 77-114). Leiden: Brill.
  • Floyd, S. (2005). The poetics of evidentiality in South American storytelling. In L. Harper, & C. Jany (Eds.), Proceedings from the Eighth Workshop on American Indigenous languages (pp. 28-41). Santa Barbara, Cal: University of California, Santa Barbara. (Santa Barbara Papers in Linguistics; 46).
  • Folia, V., & Petersson, K. M. (2014). Implicit structured sequence learning: An fMRI study of the structural mere-exposure effect. Frontiers in Psychology, 5: 41. doi:10.3389/fpsyg.2014.00041.

    Abstract

    In this event-related FMRI study we investigated the effect of five days of implicit acquisition on preference classification by means of an artificial grammar learning (AGL) paradigm based on the structural mere-exposure effect and preference classification using a simple right-linear unification grammar. This allowed us to investigate implicit AGL in a proper learning design by including baseline measurements prior to grammar exposure. After 5 days of implicit acquisition, the FMRI results showed activations in a network of brain regions including the inferior frontal (centered on BA 44/45) and the medial prefrontal regions (centered on BA 8/32). Importantly, and central to this study, the inclusion of a naive preference FMRI baseline measurement allowed us to conclude that these FMRI findings were the intrinsic outcomes of the learning process itself and not a reflection of a preexisting functionality recruited during classification, independent of acquisition. Support for the implicit nature of the knowledge utilized during preference classification on day 5 comes from the fact that the basal ganglia, associated with implicit procedural learning, were activated during classification, while the medial temporal lobe system, associated with explicit declarative memory, was consistently deactivated. Thus, preference classification in combination with structural mere-exposure can be used to investigate structural sequence processing (syntax) in unsupervised AGL paradigms with proper learning designs.
  • Fonteijn, H. M., Modat, M., Clarkson, M. J., Barnes, J., Lehmann, M., Hobbs, N. Z., Scahill, R. I., Tabrizi, S. J., Ourselin, S., Fox, N. C., & Alexander, D. C. (2012). An event-based model for disease progression and its application in familial Alzheimer's disease and Huntington's disease. NeuroImage, 60, 1880-1889. doi:10.1016/j.neuroimage.2012.01.062.

    Abstract

    Understanding the progression of neurological diseases is vital for accurate and early diagnosis and treatment planning. We introduce a new characterization of disease progression, which describes the disease as a series of events, each comprising a significant change in patient state. We provide novel algorithms to learn the event ordering from heterogeneous measurements over a whole patient cohort and demonstrate using combined imaging and clinical data from familial-Alzheimer's and Huntington's disease cohorts. Results provide new detail in the progression pattern of these diseases, while confirming known features, and give unique insight into the variability of progression over the cohort. The key advantage of the new model and algorithms over previous progression models is that they do not require a priori division of the patients into clinical stages. The model and its formulation extend naturally to a wide range of other diseases and developmental processes and accommodate cross-sectional and longitudinal input data.
  • Forkel, S. J., Thiebaut de Schotten, M., Dell’Acqua, F., Kalra, L., Murphy, D. G. M., Williams, S. C. R., & Catani, M. (2014). Anatomical predictors of aphasia recovery: a tractography study of bilateral perisylvian language networks. Brain, 137, 2027-2039. doi:10.1093/brain/awu113.

    Abstract

    Stroke-induced aphasia is associated with adverse effects on quality of life and the ability to return to work. For patients and clinicians the possibility of relying on valid predictors of recovery is an important asset in the clinical management of stroke-related impairment. Age, level of education, type and severity of initial symptoms are established predictors of recovery. However, anatomical predictors are still poorly understood. In this prospective longitudinal study, we intended to assess anatomical predictors of recovery derived from diffusion tractography of the perisylvian language networks. Our study focused on the arcuate fasciculus, a language pathway composed of three segments connecting Wernicke’s to Broca’s region (i.e. long segment), Wernicke’s to Geschwind’s region (i.e. posterior segment) and Broca’s to Geschwind’s region (i.e. anterior segment). In our study we were particularly interested in understanding how lateralization of the arcuate fasciculus impacts on severity of symptoms and their recovery. Sixteen patients (10 males; mean age 60 ± 17 years, range 28–87 years) underwent post stroke language assessment with the Revised Western Aphasia Battery and neuroimaging scanning within a fortnight from symptoms onset. Language assessment was repeated at 6 months. Backward elimination analysis identified a subset of predictor variables (age, sex, lesion size) to be introduced to further regression analyses. A hierarchical regression was conducted with the longitudinal aphasia severity as the dependent variable. The first model included the subset of variables as previously defined. The second model additionally introduced the left and right arcuate fasciculus (separate analysis for each segment). Lesion size was identified as the only independent predictor of longitudinal aphasia severity in the left hemisphere [beta = −0.630, t(−3.129), P = 0.011]. For the right hemisphere, age [beta = −0.678, t(–3.087), P = 0.010] and volume of the long segment of the arcuate fasciculus [beta = 0.730, t(2.732), P = 0.020] were predictors of longitudinal aphasia severity. Adding the volume of the right long segment to the first-level model increased the overall predictive power of the model from 28% to 57% [F(1,11) = 7.46, P = 0.02]. These findings suggest that different predictors of recovery are at play in the left and right hemisphere. The right hemisphere language network seems to be important in aphasia recovery after left hemispheric stroke.

    Additional information

    supplementary information
  • Forkel, S. J. (2014). Identification of anatomical predictors of language recovery after stroke with diffusion tensor imaging. PhD Thesis, King's College London, London.

    Abstract

    Background Stroke-induced aphasia is associated with adverse effects on quality of life and the ability to return to work. However, the predictors of recovery are still poorly understood. Anatomical variability of the arcuate fasciculus, connecting Broca’s and Wernicke’s areas, has been reported in the healthy population using diffusion tensor imaging tractography. In about 40% of the population the arcuate fasciculus is bilateral and this pattern is advantageous for certain language-related functions, such as auditory verbal learning (Catani et al. 2007). Methods In this prospective longitudinal study, anatomical predictors of post-stroke aphasia recovery were investigated using diffusion tractography and arterial spin labelling. Patients An aphasia cohort of 18 patients with first-ever unilateral left hemispheric middle cerebral artery infarcts underwent post stroke language (mean 5±5 days) and neuroimaging (mean 10±6 days) assessments and neuropsychological follow-up at six months. Ten of these patients were available for reassessment one year after symptom onset. Aphasia was assessed with the Western Aphasia Battery, which provides a global measure of severity (Aphasia Quotient, AQ). Results Better recovery from aphasia was observed in patients with a right arcuate fasciculus [beta=.730, t(2.732), p=.020] (tractography) and increased fractional anisotropy in the right hemisphere (p<0.05) (Tract-based spatial statistics). Further, an increase in left hemisphere perfusion was observed after one year (p<0.01) (perfusion). Lesion analysis identified maximal overlay in the periinsular white matter (WM). Lesion-symptom mapping identified damage to periinsular structures as predictive of overall aphasia severity and damage to frontal lobe white matter as predictive of repetition deficits. Conclusion These findings suggest an important role for the right hemisphere language network in recovery from aphasia after left hemispheric stroke.

    Additional information

    Link to repository
  • Forkel, S. J., Thiebaut de Schotten, M., Kawadler, J. M., Dell'Acqua, F., Danek, A., & Catani, M. (2014). The anatomy of fronto-occipital connections from early blunt dissections to contemporary tractography. Cortex, 56, 73-84. doi:10.1016/j.cortex.2012.09.005.

    Abstract

    The occipital and frontal lobes are anatomically distant yet functionally highly integrated to generate some of the most complex behaviour. A series of long associative fibres, such as the fronto-occipital networks, mediate this integration via rapid feed-forward propagation of visual input to anterior frontal regions and direct top–down modulation of early visual processing.

    Despite the vast number of anatomical investigations, a general consensus on the anatomy of fronto-occipital connections is not forthcoming. For example, in the monkey the existence of an equivalent of the human ‘inferior fronto-occipital fasciculus’ (iFOF) has not been demonstrated. Conversely, a ‘superior fronto-occipital fasciculus’ (sFOF), also referred to as ‘subcallosal bundle’ by some authors, is reported in monkey axonal tracing studies but not in human dissections.

    In this study our aim is twofold. First, we use diffusion tractography to delineate the in vivo anatomy of the sFOF and the iFOF in 30 healthy subjects and three acallosal brains. Second, we provide a comprehensive review of the post-mortem and neuroimaging studies of the fronto-occipital connections published over the last two centuries, together with the first integral translation of Onufrowicz's original description of a human fronto-occipital fasciculus (1887) and Muratoff's report of the ‘subcallosal bundle’ in animals (1893).

    Our tractography dissections suggest that in the human brain (i) the iFOF is a bilateral association pathway connecting ventro-medial occipital cortex to orbital and polar frontal cortex, (ii) the sFOF overlaps with branches of the superior longitudinal fasciculus (SLF) and probably represents an ‘occipital extension’ of the SLF, (iii) the subcallosal bundle of Muratoff is probably a complex tract encompassing ascending thalamo-frontal and descending fronto-caudate connections and is therefore a projection rather than an associative tract.

    In conclusion, our experimental findings and review of the literature suggest that a ventral pathway in humans, namely the iFOF, mediates a direct communication between occipital and frontal lobes. Whether the iFOF represents a unique human pathway awaits further ad hoc investigations in animals.
  • Forkstam, C., & Petersson, K. M. (2005). Towards an explicit account of implicit learning. Current Opinion in Neurology, 18(4), 435-441.

    Abstract

    Purpose of review: The human brain supports acquisition mechanisms that can extract structural regularities implicitly from experience without the induction of an explicit model. Reber defined the process by which an individual comes to respond appropriately to the statistical structure of the input ensemble as implicit learning. He argued that the capacity to generalize to new input is based on the acquisition of abstract representations that reflect underlying structural regularities in the acquisition input. We focus this review of the implicit learning literature on studies published during 2004 and 2005. We will not review studies of repetition priming ('implicit memory'). Instead we focus on two commonly used experimental paradigms: the serial reaction time task and artificial grammar learning. Previous comprehensive reviews can be found in Seger's 1994 article and the Handbook of Implicit Learning. Recent findings: Emerging themes include the interaction between implicit and explicit processes, the role of the medial temporal lobe, developmental aspects of implicit learning, age-dependence, and the role of sleep and consolidation. Summary: Attempts to characterize the interaction between implicit and explicit learning are promising, although this interaction is not yet well understood. The same can be said about the role of sleep and consolidation. Despite the fact that lesion studies have relatively consistently suggested that the medial temporal lobe memory system is not necessary for implicit learning, a number of functional magnetic resonance studies have reported medial temporal lobe activation in implicit learning. This issue merits further research. Finally, the clinical relevance of implicit learning remains to be determined.
  • Forkstam, C., & Petersson, K. M. (2005). Syntactic classification of acquired structural regularities. In G. B. Bruna, & L. Barsalou (Eds.), Proceedings of the 27th Annual Conference of the Cognitive Science Society (pp. 696-701).

    Abstract

    In this paper we investigate the neural correlates of syntactic classification of an acquired grammatical sequence structure in an event-related FMRI study. During acquisition, participants were engaged in an implicit short-term memory task without performance feedback. We manipulated the statistical frequency-based and rule-based characteristics of the classification stimuli independently in order to investigate their role in artificial grammar acquisition. The participants performed reliably above chance on the classification task. We observed a partly overlapping corticostriatal processing network activated by both manipulations including inferior prefrontal, cingulate, inferior parietal regions, and the caudate nucleus. More specifically, the left inferior frontal BA 45 and the caudate nucleus were sensitive to syntactic violations and endorsement, respectively. In contrast, these structures were insensitive to the frequency-based manipulation.
  • Franceschini, R. (2012). Wolfgang Klein und die LiLi [Laudatio]. Zeitschrift für Literaturwissenschaft und Linguistik, 42(168), 5-7.
  • Francisco, A. A., Jesse, A., Groen, M. A., & McQueen, J. M. (2014). Audiovisual temporal sensitivity in typical and dyslexic adult readers. In Proceedings of the 15th Annual Conference of the International Speech Communication Association (INTERSPEECH 2014) (pp. 2575-2579).

    Abstract

    Reading is an audiovisual process that requires the learning of systematic links between graphemes and phonemes. It is thus possible that reading impairments reflect an audiovisual processing deficit. In this study, we compared audiovisual processing in adults with developmental dyslexia and adults without reading difficulties. We focused on differences in cross-modal temporal sensitivity both for speech and for non-speech events. When compared to adults without reading difficulties, adults with developmental dyslexia presented a wider temporal window in which unsynchronized speech events were perceived as synchronized. No differences were found between groups for the non-speech events. These results suggest a deficit in dyslexia in the perception of cross-modal temporal synchrony for speech events.
  • Franken, M. K., Huizinga, C. S. M., & Schiller, N. O. (2012). De grafemische buffer: Aspecten van een spellingstoornis. Stem- Spraak- en Taalpathologie, 17(3), 17-36.

    Abstract

    A spelling disorder that has received much attention recently is the so-called graphemic buffer impairment. Caramazza et al. (1987) presented the first systematic case study of a patient with this disorder. Miceli & Capasso (2006) provide an extensive overview of the relevant literature. This article adds to the literature by describing a Dutch case, i.e. patient BM. We demonstrate how specific features of Dutch and Dutch orthography interact with the graphemic buffer impairment. In addition, we paid special attention to the influence of grapheme position on the patient’s spelling accuracy. For this we used, in contrast with most of the previous literature, the proportional accountability method described in Machtynger & Shallice (2009). We show that with this method the underlying error distribution can be captured better than with classical methods. The result of this analysis replicates two distributions that have been previously reported in the literature. Finally, attention is paid to the role of phonology in the described disorder.
  • Frauenfelder, U. H., & Cutler, A. (1985). Preface. Linguistics, 23(5). doi:10.1515/ling.1985.23.5.657.
  • French, C. A., Jin, X., Campbell, T. G., Gerfen, E., Groszer, M., Fisher, S. E., & Costa, R. M. (2012). An aetiological Foxp2 mutation causes aberrant striatal activity and alters plasticity during skill learning. Molecular Psychiatry, 17, 1077-1085. doi:10.1038/mp.2011.105.

    Abstract

    Mutations in the human FOXP2 gene cause impaired speech development and linguistic deficits, which have been best characterised in a large pedigree called the KE family. The encoded protein is highly conserved in many vertebrates and is expressed in homologous brain regions required for sensorimotor integration and motor-skill learning, in particular corticostriatal circuits. Independent studies in multiple species suggest that the striatum is a key site of FOXP2 action. Here, we used in vivo recordings in awake-behaving mice to investigate the effects of the KE-family mutation on the function of striatal circuits during motor-skill learning. We uncovered abnormally high ongoing striatal activity in mice carrying an identical mutation to that of the KE family. Furthermore, there were dramatic alterations in striatal plasticity during the acquisition of a motor skill, with most neurons in mutants showing negative modulation of firing rate, starkly contrasting with the predominantly positive modulation seen in control animals. We also observed striking changes in the temporal coordination of striatal firing during motor-skill learning in mutants. Our results indicate that FOXP2 is critical for the function of striatal circuits in vivo, which are important not only for speech but also for other striatal-dependent skills.

    Additional information

    French_2011_Supplementary_Info.pdf
  • French, C. A., & Fisher, S. E. (2014). What can mice tell us about Foxp2 function? Current Opinion in Neurobiology, 28, 72-79. doi:10.1016/j.conb.2014.07.003.

    Abstract

    Disruptions of the FOXP2 gene cause a rare speech and language disorder, a discovery that has opened up novel avenues for investigating the relevant neural pathways. FOXP2 shows remarkably high conservation of sequence and neural expression in diverse vertebrates, suggesting that studies in other species are useful in elucidating its functions. Here we describe how investigations of mice that carry disruptions of Foxp2 provide insights at multiple levels: molecules, cells, circuits and behaviour. Work thus far has implicated the gene in key processes including neurite outgrowth, synaptic plasticity, sensorimotor integration and motor-skill learning.
  • Frost, R. (2014). Learning grammatical structures with and without sleep. PhD Thesis, Lancaster University, Lancaster.
  • Frost, R. L. A., Gaskell, G., Warker, J., Guest, J., Snowdon, R., & Stackhouse, A. (2012). Sleep Facilitates Acquisition of Implicit Phonotactic Constraints in Speech Production. Journal of sleep research, 21(s1), 249-249. doi:10.1111/j.1365-2869.2012.01044.x.

    Abstract

    Sleep plays an important role in neural reorganisation which underpins memory consolidation. The gradual replacement of hippocampal binding of new memories with intracortical connections helps to link new memories to existing knowledge. This process appears to be faster for memories which fit more easily into existing schemas. Here we seek to investigate whether this more rapid consolidation of schema-conformant information is facilitated by sleep, and the neural basis of this process.
  • De la Fuente, J., Santiago, J., Roma, A., Dumitrache, C., & Casasanto, D. (2012). Facing the past: cognitive flexibility in the front-back mapping of time [Abstract]. Cognitive Processing; Special Issue "ICSC 2012, the 5th International Conference on Spatial Cognition: Space and Embodied Cognition". Poster Presentations, 13(Suppl. 1), S58.

    Abstract

    In many languages the future is in front and the past behind, but in some cultures (like Aymara) the past is in front. Is it possible to find this mapping as an alternative conceptualization of time in other cultures? If so, what are the factors that affect its choice out of the set of available alternatives? In a paper-and-pencil task, participants placed future or past events either in front or behind a character (a schematic head viewed from above). A sample of 24 Islamic participants (whose language also places the future in front and the past behind) tended to locate the past event in the front box more often than Spanish participants. This result might be due to the greater cultural value assigned to tradition in Islamic culture. The same pattern was found in a sample of Spanish elders (N = 58), which may support that conclusion. Alternatively, the crucial factor may be the amount of attention paid to the past. In a final study, young Spanish adults (N = 200) who had just answered a set of questions about their past showed the past-in-front pattern, whereas questions about their future exacerbated the future-in-front pattern. Thus, the attentional explanation was supported: attended events are mapped to front space in agreement with the experiential connection between attending and seeing. When attention is paid to the past, it tends to occupy the front location in spite of available alternative mappings in the language-culture.
  • Fuhrmann, D., Ravignani, A., Marshall-Pescini, S., & Whiten, A. (2014). Synchrony and motor mimicking in chimpanzee observational learning. Scientific Reports, 4: 5283. doi:10.1038/srep05283.

    Abstract

    Cumulative tool-based culture underwrote our species' evolutionary success and tool-based nut-cracking is one of the strongest candidates for cultural transmission in our closest relatives, chimpanzees. However, the social learning processes that may explain both the similarities and differences between the species remain unclear. A previous study of nut-cracking by initially naïve chimpanzees suggested that a learning chimpanzee holding no hammer nevertheless replicated hammering actions it witnessed. This observation has potentially important implications for the nature of the social learning processes and underlying motor coding involved. In the present study, model and observer actions were quantified frame-by-frame and analysed with stringent statistical methods, demonstrating synchrony between the observer's and model's movements, cross-correlation of these movements above chance level and a unidirectional transmission process from model to observer. These results provide the first quantitative evidence for motor mimicking underlain by motor coding in apes, with implications for mirror neuron function.
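    As a rough illustration of the frame-by-frame analysis described above (not the authors' pipeline), the sketch below computes a normalized cross-correlation between two equally long movement time series at a range of lags; a chance level could then be estimated by repeating the computation on permuted or time-shuffled sequences. The inputs are hypothetical position traces.

        import numpy as np

        def normalized_xcorr(a, b, max_lag=25):
            """Cross-correlation between two equally long movement time series
            (e.g. frame-by-frame hand positions of model and observer),
            evaluated at lags from -max_lag to +max_lag frames."""
            a = (np.asarray(a, float) - np.mean(a)) / np.std(a)
            b = (np.asarray(b, float) - np.mean(b)) / np.std(b)
            out = {}
            for lag in range(-max_lag, max_lag + 1):
                if lag >= 0:
                    out[lag] = float(np.mean(a[lag:] * b[:len(b) - lag]))
                else:
                    out[lag] = float(np.mean(a[:lag] * b[-lag:]))
            return out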

    Additional information

    Supplementary Information
  • Furman, R. (2012). Caused motion events in Turkish: Verbal and gestural representation in adults and children. PhD Thesis, Radboud University Nijmegen/LOT.

    Abstract

    Caused motion events (e.g. a boy pulls a box into a room) are basic events where an Agent (the boy) performs an Action (pulling) that causes a Figure (box) to move in a spatial Path (into) to a Goal (the room). These semantic elements are mapped onto lexical and syntactic structures differently across languages. This dissertation investigates the encoding of caused motion events in Turkish, and the development of this encoding in speech and gesture. First, a linguistic analysis shows that Turkish does not fully fit into the expected typological patterns, and that the encoding of caused motion is determined by the fine-grained lexical semantics of a verb as well as the syntactic construction the verb is integrated into. A grammaticality judgment study conducted with adult Turkish speakers further establishes the fundamentals of the encoding patterns. An event description study compares adults’ verbal and gestural representations of caused motion to those of children aged 3 to 5. The findings indicate that although language-specificity is evident in children’s speech and gestures, the development of adult patterns takes time and occurs after the age of 5. A final study investigates a longitudinal video corpus of the spontaneous speech of Turkish-speaking children aged 1 to 3, and finds that language-specificity is evident from the start in both children’s speech and gesture. Apart from contributing to the literature on the development of Turkish, this dissertation furthers our understanding of the interaction between language-specificity and the multimodal expression of semantic information in event descriptions.
  • Furman, R., Kuntay, A., & Ozyurek, A. (2014). Early language-specificity of children's event encoding in speech and gesture: Evidence from caused motion in Turkish. Language, Cognition and Neuroscience, 29, 620-634. doi:10.1080/01690965.2013.824993.

    Abstract

    Previous research on language development shows that children are tuned early on to the language-specific semantic and syntactic encoding of events in their native language. Here we ask whether language-specificity is also evident in children's early representations in gesture accompanying speech. In a longitudinal study, we examined the spontaneous speech and cospeech gestures of eight Turkish-speaking children aged one to three and focused on their caused motion event expressions. In Turkish, unlike in English, the main semantic elements of caused motion such as Action and Path can be encoded in the verb (e.g. sok- ‘put in’) and the arguments of a verb can be easily omitted. We found that Turkish-speaking children's speech indeed displayed these language-specific features and focused on verbs to encode caused motion. More interestingly, we found that their early gestures also manifested specificity. Children used iconic cospeech gestures (from 19 months onwards) as often as pointing gestures and represented semantic elements such as Action with Figure and/or Path that reinforced or supplemented speech in language-specific ways until the age of three. In the light of previous reports on the scarcity of iconic gestures in English-speaking children's early productions, we argue that the language children learn shapes gestures and how they get integrated with speech in the first three years of life.
  • Gaby, A. R. (2005). Some participants are more equal than others: Case and the composition of arguments in Kuuk Thaayorre. In M. Amberber, & H. d. Hoop (Eds.), Competition and variation in natural languages: the case for the case (pp. 9-39). Amsterdam: Elsevier.
  • Gaby, A. (2012). The Thaayorre lexicon of putting and taking. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 233-252). Amsterdam: Benjamins.

    Abstract

    This paper investigates the lexical semantics and relative distributions of verbs describing putting and taking events in Kuuk Thaayorre, a Pama-Nyungan language of Cape York (Australia). Thaayorre put/take verbs can be subcategorised according to whether they may combine with an NP encoding a goal, an NP encoding a source, or both. Goal NPs are far more frequent in natural discourse: initial analysis shows 85% of goal-oriented verb tokens to be accompanied by a goal NP, while only 31% of source-oriented verb tokens were accompanied by a source. This finding adds weight to Ikegami’s (1987) assertion of the conceptual primacy of goals over sources, reflected in a cross-linguistic dissymmetry whereby goal-marking is less marked and more widely used than source-marking.
  • Ganushchak, L. Y., Krott, A., & Meyer, A. S. (2012). From gr8 to great: Lexical access to SMS shortcuts. Frontiers in Psychology, 3, 150. doi:10.3389/fpsyg.2012.00150.

    Abstract

    Many contemporary texts include shortcuts, such as cu or phones4u. The aim of this study was to investigate how the meanings of shortcuts are retrieved. A primed lexical decision paradigm was used with shortcuts and the corresponding words as primes. The target word was associatively related to the meaning of the whole prime (cu/see you – goodbye), to a component of the prime (cu/see you – look), or unrelated to the prime. In Experiment 1, primes were presented for 57 ms. For both word and shortcut primes, responses were faster to targets preceded by whole-related than by unrelated primes. No priming from component-related primes was found. In Experiment 2, the prime duration was 1000 ms. The priming effect seen in Experiment 1 was replicated. Additionally, there was priming from component-related word primes, but not from component-related shortcut primes. These results indicate that the meanings of shortcuts can be retrieved without translating them first into corresponding words.
  • Ganushchak, L., Konopka, A. E., & Chen, Y. (2014). What the eyes say about planning of focused referents during sentence formulation: a cross-linguistic investigation. Frontiers in Psychology, 5: 1124. doi:10.3389/fpsyg.2014.01124.

    Abstract

    This study investigated how sentence formulation is influenced by a preceding discourse context. In two eye-tracking experiments, participants described pictures of two-character transitive events in Dutch (Experiment 1) and Chinese (Experiment 2). Focus was manipulated by presenting questions before each picture. In the Neutral condition, participants first heard ‘What is happening here?’ In the Object or Subject Focus conditions, the questions asked about the Object or Subject character (What is the policeman stopping? Who is stopping the truck?). The target response was the same in all conditions (The policeman is stopping the truck). In both experiments, sentence formulation in the Neutral condition showed the expected pattern of speakers fixating the subject character (policeman) before the object character (truck). In contrast, in the focus conditions speakers rapidly directed their gaze preferentially only to the character they needed to encode to answer the question (the new, or focused, character). The timing of gaze shifts to the new character varied by language group (Dutch vs. Chinese): shifts to the new character occurred earlier when information in the question can be repeated in the response with the same syntactic structure (in Chinese but not in Dutch). The results show that discourse affects the timecourse of linguistic formulation in simple sentences and that these effects can be modulated by language-specific linguistic structures such as parallels in the syntax of questions and declarative sentences.
  • Ganushchak, L. Y., & Acheson, D. J. (Eds.). (2014). What's to be learned from speaking aloud? - Advances in the neurophysiological measurement of overt language production. [Research topic] [Special Issue]. Frontiers in Language Sciences. Retrieved from http://www.frontiersin.org/Language_Sciences/researchtopics/What_s_to_be_Learned_from_Spea/1671.

    Abstract

    Researchers have long avoided neurophysiological experiments of overt speech production due to the suspicion that artifacts caused by muscle activity may lead to a bad signal-to-noise ratio in the measurements. However, the need to actually produce speech may influence earlier processing and qualitatively change speech production processes and what we can infer from neurophysiological measures thereof. Recently, however, overt speech has been successfully investigated using EEG, MEG, and fMRI. The aim of this Research Topic is to draw together recent research on the neurophysiological basis of language production, with the aim of developing and extending theoretical accounts of the language production process. In this Research Topic of Frontiers in Language Sciences, we invite both experimental and review papers, as well as those about the latest methods in acquisition and analysis of overt language production data. All aspects of language production are welcome: i.e., from conceptualization to articulation during native as well as multilingual language production. Focus should be placed on using the neurophysiological data to inform questions about the processing stages of language production. In addition, emphasis should be placed on the extent to which the identified components of the electrophysiological signal (e.g., ERP/ERF, neuronal oscillations, etc.), brain areas or networks are related to language comprehension and other cognitive domains. By bringing together electrophysiological and neuroimaging evidence on language production mechanisms, a more complete picture of the locus of language production processes and their temporal and neurophysiological signatures will emerge.
  • Gao, X., Levinthal, B. R., & Stine-Morrow, E. A. L. (2012). The effects of ageing and visual noise on conceptual integration during sentence reading. Quarterly journal of experimental psychology, 65(9), 1833-1847. doi:10.1080/17470218.2012.674146.

    Abstract

    The effortfulness hypothesis implies that difficulty in decoding the surface form, as in the case of age-related sensory limitations or background noise, consumes the attentional resources that are then unavailable for semantic integration in language comprehension. Because ageing is associated with sensory declines, degrading of the surface form by a noisy background can pose an extra challenge for older adults. In two experiments, this hypothesis was tested in a self-paced moving window paradigm in which younger and older readers' online allocation of attentional resources to surface decoding and semantic integration was measured as they read sentences embedded in varying levels of visual noise. When visual noise was moderate (Experiment 1), resource allocation among young adults was unaffected but older adults allocated more resources to decode the surface form at the cost of resources that would otherwise be available for semantic processing; when visual noise was relatively intense (Experiment 2), both younger and older participants allocated more attention to the surface form and less attention to semantic processing. The decrease in attentional allocation to semantic integration resulted in reduced recall of core ideas in both experiments, suggesting that a less organized semantic representation was constructed in noise. The greater vulnerability of older adults at relatively low levels of noise is consistent with the effortfulness hypothesis.
  • Gaskell, M. G., Warker, J., Lindsay, S., Frost, R. L. A., Guest, J., Snowdon, R., & Stackhouse, A. (2014). Sleep Underpins the Plasticity of Language Production. Psychological Science, 25(7), 1457-1465. doi:10.1177/0956797614535937.

    Abstract

    The constraints that govern acceptable phoneme combinations in speech perception and production have considerable plasticity. We addressed whether sleep influences the acquisition of new constraints and their integration into the speech-production system. Participants repeated sequences of syllables in which two phonemes were artificially restricted to syllable onset or syllable coda, depending on the vowel in that sequence. After 48 sequences, participants either had a 90-min nap or remained awake. Participants then repeated 96 sequences so implicit constraint learning could be examined, and then were tested for constraint generalization in a forced-choice task. The sleep group, but not the wake group, produced speech errors at test that were consistent with restrictions on the placement of phonemes in training. Furthermore, only the sleep group generalized their learning to new materials. Polysomnography data showed that implicit constraint learning was associated with slow-wave sleep. These results show that sleep facilitates the integration of new linguistic knowledge with existing production constraints. These data have relevance for systems-consolidation models of sleep.

    Additional information

    https://osf.io/zqg9y/
  • Gast, V., & Levshina, N. (2014). Motivating w(h)-Clefts in English and German: A hypothesis-driven parallel corpus study. In A.-M. De Cesare (Ed.), Frequency, Forms and Functions of Cleft Constructions in Romance and Germanic: Contrastive, Corpus-Based Studies (pp. 377-414). Berlin: De Gruyter.
  • Gayán, J., Willcutt, E. G., Fisher, S. E., Francks, C., Cardon, L. R., Olson, R. K., Pennington, B. F., Smith, S., Monaco, A. P., & DeFries, J. C. (2005). Bivariate linkage scan for reading disability and attention-deficit/hyperactivity disorder localizes pleiotropic loci. Journal of Child Psychology and Psychiatry, 46(10), 1045-1056. doi:10.1111/j.1469-7610.2005.01447.x.

    Abstract

    BACKGROUND: There is a growing interest in the study of the genetic origins of comorbidity, a direct consequence of the recent findings of genetic loci that are seemingly linked to more than one disorder. There are several potential causes for these shared regions of linkage, but one possibility is that these loci may harbor genes with manifold effects. The established genetic correlation between reading disability (RD) and attention-deficit/hyperactivity disorder (ADHD) suggests that their comorbidity is due at least in part to genes that have an impact on several phenotypes, a phenomenon known as pleiotropy. METHODS: We employ a bivariate linkage test for selected samples that could help identify these pleiotropic loci. This linkage method was employed to carry out the first bivariate genome-wide analysis for RD and ADHD, in a selected sample of 182 sibling pairs. RESULTS: We found evidence for a novel locus at chromosome 14q32 (multipoint LOD=2.5; singlepoint LOD=3.9) with a pleiotropic effect on RD and ADHD. Another locus at 13q32, which had been implicated in previous univariate scans of RD and ADHD, seems to have a pleiotropic effect on both disorders. 20q11 is also suggested as a pleiotropic locus. Other loci previously implicated in RD or ADHD did not exhibit bivariate linkage. CONCLUSIONS: Some loci are suggested as having pleiotropic effects on RD and ADHD, while others might have unique effects. These results highlight the utility of this bivariate linkage method to study pleiotropy.
  • Gebre, B. G., & Wittenburg, P. (2012). Adaptive automatic gesture stroke detection. In J. C. Meister (Ed.), Digital Humanities 2012 Conference Abstracts. University of Hamburg, Germany; July 16–22, 2012 (pp. 458-461).

    Abstract

    Many gesture and sign language researchers manually annotate video recordings to systematically categorize, analyze and explain their observations. The number and kinds of annotations are so diverse and unpredictable that any attempt at developing non-adaptive automatic annotation systems is usually less effective. The trend in the literature has been to develop models that work for average users and for average scenarios. This approach has three main disadvantages. First, it is impossible to know beforehand all the patterns that could be of interest to all researchers. Second, it is practically impossible to find enough training examples for all patterns. Third, it is currently impossible to learn a model that is robustly applicable across all video quality-recording variations.
  • Gebre, B. G., Wittenburg, P., Heskes, T., & Drude, S. (2014). Motion history images for online speaker/signer diarization. In Proceedings of the 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) (pp. 1537-1541). Piscataway, NJ: IEEE.

    Abstract

    We present a solution to the problem of online speaker/signer diarization - the task of determining "who spoke/signed when?". Our solution is based on the idea that gestural activity (hands and body movement) is highly correlated with uttering activity. This correlation is necessarily true for sign languages and mostly true for spoken languages. The novel part of our solution is the use of motion history images (MHI) as a likelihood measure for probabilistically detecting uttering activities. MHI is an efficient representation of where and how motion occurred for a fixed period of time. We conducted experiments on 4.9 hours of a publicly available dataset (the AMI meeting data) and 1.4 hours of sign language dataset (Kata Kolok data). The best performance obtained is 15.70% for sign language and 31.90% for spoken language (measurements are in DER). These results show that our solution is applicable in real-world applications like video conferences.
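    Motion history images are a standard representation: pixels where motion is currently detected are stamped with a maximum value and all other pixels decay by one per frame, so recent movement appears brighter than older movement. The sketch below uses simple frame differencing for the motion mask, which is an assumption for illustration rather than the authors' implementation.

        import numpy as np

        def motion_history_image(frames, diff_threshold=25, tau=30):
            """Compute an MHI over a sequence of grayscale frames (uint8 arrays).
            Moving pixels are stamped with tau; all others decay by 1 per frame."""
            mhi = np.zeros(frames[0].shape, dtype=np.int32)
            prev = frames[0].astype(np.int16)
            for frame in frames[1:]:
                cur = frame.astype(np.int16)
                motion_mask = np.abs(cur - prev) > diff_threshold
                mhi = np.where(motion_mask, tau, np.maximum(mhi - 1, 0))
                prev = cur
            return mhi

    Region-wise statistics of such an image (for example, the mean intensity around each participant) could then serve as the kind of likelihood measure for uttering activity that the abstract describes.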

  • Gebre, B. G., Wittenburg, P., Drude, S., Huijbregts, M., & Heskes, T. (2014). Speaker diarization using gesture and speech. In H. Li, & P. Ching (Eds.), Proceedings of Interspeech 2014: 15th Annual Conference of the International Speech Communication Association (pp. 582-586).

    Abstract

    We demonstrate how the problem of speaker diarization can be solved using both gesture and speaker parametric models. The novelty of our solution is that we approach the speaker diarization problem as a speaker recognition problem after learning speaker models from speech samples corresponding to gestures (the occurrence of gestures indicates the presence of speech and the location of gestures indicates the identity of the speaker). This new approach offers many advantages: comparable state-of-the-art performance, faster computation and more adaptability. In our implementation, parametric models are used to model speakers' voice and their gestures: more specifically, Gaussian mixture models are used to model the voice characteristics of each person and all persons, and gamma distributions are used to model gestural activity based on features extracted from Motion History Images. Tests on 4.24 hours of the AMI meeting data show that our solution makes DER score improvements of 19% on speech-only segments and 4% on all segments including silence (the comparison is with the AMI system).
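    A minimal sketch of the modelling idea in the abstract, with hypothetical inputs: a Gaussian mixture model is trained per speaker on speech features taken from gesture-attributed segments, new segments are assigned to the best-scoring speaker, and a gamma distribution models gestural activity derived from motion-history features. Function and variable names are illustrative only.

        import numpy as np
        from sklearn.mixture import GaussianMixture
        from scipy.stats import gamma

        def train_speaker_models(features_per_speaker, n_components=8):
            """features_per_speaker: dict mapping speaker id -> (frames x dims) array
            of acoustic features (e.g. MFCCs) from that speaker's gesture segments."""
            return {spk: GaussianMixture(n_components, covariance_type="diag",
                                         random_state=0).fit(feats)
                    for spk, feats in features_per_speaker.items()}

        def assign_segment(segment_features, speaker_gmms):
            """Label a segment with the speaker whose GMM gives the highest average
            log-likelihood (the speaker-recognition view of diarization)."""
            scores = {spk: gmm.score(segment_features) for spk, gmm in speaker_gmms.items()}
            return max(scores, key=scores.get)

        def gesture_activity_likelihood(motion_energy, shape, loc, scale):
            """Gamma-distributed likelihood of gestural activity from a
            motion-history-based energy value (parameters fitted elsewhere)."""
            return gamma.pdf(motion_energy, shape, loc=loc, scale=scale)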
  • Gebre, B. G., Wittenburg, P., & Lenkiewicz, P. (2012). Towards automatic gesture stroke detection. In N. Calzolari (Ed.), Proceedings of LREC 2012: 8th International Conference on Language Resources and Evaluation (pp. 231-235). European Language Resources Association.

    Abstract

    Automatic annotation of gesture strokes is important for many gesture and sign language researchers. The unpredictable diversity of human gestures and video recording conditions require that we adopt a more adaptive case-by-case annotation model. In this paper, we present a work-in-progress annotation model that allows a user to (a) track hands/face, (b) extract features, and (c) distinguish strokes from non-strokes. The hands/face tracking is done with color matching algorithms and is initialized by the user. The initialization process is supported with immediate visual feedback. Sliders are also provided to support a user-friendly adjustment of skin color ranges. After successful initialization, features related to positions, orientations and speeds of tracked hands/face are extracted using unique identifiable features (corners) from a window of frames and are used for training a learning algorithm. Our preliminary results for stroke detection under non-ideal video conditions are promising and show the potential applicability of our methodology.
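    The user-initialized colour-matching step can be illustrated with a generic OpenCV sketch (assuming OpenCV 4.x; the HSV bounds stand in for the user-adjusted slider values and are not taken from the paper). The returned centroid, tracked over a window of frames, would feed the position, orientation and speed features mentioned above.

        import cv2
        import numpy as np

        def track_skin_region(frame, lower_hsv=(0, 40, 60), upper_hsv=(25, 180, 255)):
            """Threshold skin-like pixels in HSV space and return the centroid of the
            largest connected region (e.g. a hand or the face), or None if not found."""
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, np.array(lower_hsv, np.uint8),
                               np.array(upper_hsv, np.uint8))
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                return None
            m = cv2.moments(max(contours, key=cv2.contourArea))
            if m["m00"] == 0:
                return None
            return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # (x, y) centroid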
  • Gebre, B. G., Crasborn, O., Wittenburg, P., Drude, S., & Heskes, T. (2014). Unsupervised feature learning for visual sign language identification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: Vol 2 (pp. 370-376). Redhook, NY: Curran Proceedings.

    Abstract

    Prior research on language identification focused primarily on text and speech. In this paper, we focus on the visual modality and present a method for identifying sign languages solely from short video samples. The method is trained on unlabelled video data (unsupervised feature learning) and using these features, it is trained to discriminate between six sign languages (supervised learning). We ran experiments on video samples involving 30 signers (running for a total of 6 hours). Using leave-one-signer-out cross-validation, our evaluation on short video samples shows an average best accuracy of 84%. Given that sign languages are under-resourced, unsupervised feature learning techniques are the right tools and our results indicate that this is realistic for sign language identification.
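    Leave-one-signer-out cross-validation treats the signer's identity as the grouping variable, so every test fold contains only signers unseen during training. Below is a sketch with placeholder features and an off-the-shelf classifier; the paper's learned features and classifier differ.

        import numpy as np
        from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.random((180, 256))           # placeholder clip-level feature vectors
        y = rng.integers(0, 6, 180)          # six sign languages
        signers = rng.integers(0, 30, 180)   # 30 signers

        scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                                 groups=signers, cv=LeaveOneGroupOut())
        print(scores.mean())  # accuracy averaged over held-out signers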
  • Gentzsch, W., Lecarpentier, D., & Wittenburg, P. (2014). Big data in science and the EUDAT project. In Proceeding of the 2014 Annual SRII Global Conference.
  • Gialluisi, A., Pippucci, T., Anikster, Y., Ozbek, U., Medlej-Hashim, M., Mégarbané, A., & Romeo, G. (2012). Estimating the allele frequency of autosomal recessive disorders through mutational records and consanguinity: The homozygosity index (HI). Annals of Human Genetics, 76, 159-167. doi:10.1111/j.1469-1809.2011.00693.x.

    Abstract

    In principle mutational records make it possible to estimate frequencies of disease alleles (q) for autosomal recessive disorders using a novel approach based on the calculation of the Homozygosity Index (HI), i.e., the proportion of homozygous patients, which is complementary to the proportion of compound heterozygous patients P(CH). In other words, the rarer the disorder, the higher will be the HI and the lower will be the P(CH). To test this hypothesis we used mutational records of individuals affected with Familial Mediterranean Fever (FMF) and Phenylketonuria (PKU), born to either consanguineous or apparently unrelated parents from six population samples of the Mediterranean region. Despite the unavailability of precise values of the inbreeding coefficient for the general population, which are needed in the case of apparently unrelated parents, our estimates of q are very similar to those of previous descriptive epidemiological studies. Finally, we inferred from simulation studies that the minimum sample size needed to use this approach is 25 patients either with unrelated or first cousin parents. These results show that the HI can be used to produce a ranking order of allele frequencies of autosomal recessive disorders, especially in populations with high rates of consanguineous marriages.
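    The definitional relation stated in the abstract can be written compactly (a restatement only, not the paper's estimation formula for the allele frequency q):

        \mathrm{HI} \;=\; \frac{N_{\mathrm{homozygous}}}{N_{\mathrm{homozygous}} + N_{\mathrm{compound\ heterozygous}}} \;=\; 1 - P(\mathrm{CH})

    Under the authors' argument, the rarer the disorder, the fewer the opportunities for two different mutant alleles to meet in one patient, so P(CH) falls and HI rises.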
  • Gialluisi, A., Newbury, D. F., Wilcutt, E. G., Olson, R. K., DeFries, J. C., Brandler, W. M., Pennington, B. F., Smith, S. D., Scerri, T. S., Simpson, N. H., The SLI Consortium, Luciano, M., Evans, D. M., Bates, T. C., Stein, J. F., Talcott, J. B., Monaco, A. P., Paracchini, S., Francks, C., & Fisher, S. E. (2014). Genome-wide screening for DNA variants associated with reading and language traits. Genes, Brain and Behavior, 13, 686-701. doi:10.1111/gbb.12158.

    Abstract

    Reading and language abilities are heritable traits that are likely to share some genetic influences with each other. To identify pleiotropic genetic variants affecting these traits, we first performed a Genome-wide Association Scan (GWAS) meta-analysis using three richly characterised datasets comprising individuals with histories of reading or language problems, and their siblings. GWAS was performed in a total of 1862 participants using the first principal component computed from several quantitative measures of reading- and language-related abilities, both before and after adjustment for performance IQ. We identified novel suggestive associations at the SNPs rs59197085 and rs5995177 (uncorrected p≈10−7 for each SNP), located respectively at the CCDC136/FLNC and RBFOX2 genes. Each of these SNPs then showed evidence for effects across multiple reading and language traits in univariate association testing against the individual traits. FLNC encodes a structural protein involved in cytoskeleton remodelling, while RBFOX2 is an important regulator of alternative splicing in neurons. The CCDC136/FLNC locus showed association with a comparable reading/language measure in an independent sample of 6434 participants from the general population, although involving distinct alleles of the associated SNP. Our datasets will form an important part of on-going international efforts to identify genes contributing to reading and language skills.
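    The composite phenotype construction (the first principal component of several standardized reading- and language-related measures, optionally after adjusting each measure for performance IQ) can be sketched as follows; the input arrays are hypothetical and the sketch is not the study's pipeline.

        import numpy as np
        from sklearn.decomposition import PCA

        def composite_phenotype(measures, iq=None):
            """measures: (participants x tests) array of reading/language scores.
            Standardize each test, optionally residualize on performance IQ,
            then return the first principal component score per participant."""
            X = (measures - measures.mean(axis=0)) / measures.std(axis=0)
            if iq is not None:
                design = np.column_stack([np.ones(len(iq)), iq])
                beta, *_ = np.linalg.lstsq(design, X, rcond=None)
                X = X - design @ beta
            return PCA(n_components=1).fit_transform(X).ravel()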
  • Gialluisi, A., Pippucci, T., & Romeo, G. (2014). Reply to ten Kate et al. European Journal of Human Genetics, 2, 157-158. doi:10.1038/ejhg.2013.153.
  • Gisladottir, R. S., Chwilla, D., Schriefers, H., & Levinson, S. C. (2012). Speech act recognition in conversation: Experimental evidence. In N. Miyake, D. Peebles, & R. P. Cooper (Eds.), Proceedings of the 34th Annual Meeting of the Cognitive Science Society (CogSci 2012) (pp. 1596-1601). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2012/papers/0282/index.html.

    Abstract

    Recognizing the speech acts in our interlocutors’ utterances is a crucial prerequisite for conversation. However, it is not a trivial task given that the form and content of utterances is frequently underspecified for this level of meaning. In the present study we investigate participants’ competence in categorizing speech acts in such action-underspecific sentences and explore the time-course of speech act inferencing using a self-paced reading paradigm. The results demonstrate that participants are able to categorize the speech acts with very high accuracy, based on limited context and without any prosodic information. Furthermore, the results show that the exact same sentence is processed differently depending on the speech act it performs, with reading times starting to differ already at the first word. These results indicate that participants are very good at “getting” the speech acts, opening up a new arena for experimental research on action recognition in conversation.
  • Gonzalez Gomez, N., Hayashi, A., Tsuji, S., Mazuka, R., & Nazzi, T. (2014). The role of the input on the development of the LC bias: A crosslinguistic comparison. Cognition, 132(3), 301-311. doi:10.1016/j.cognition.2014.04.004.

    Abstract

    Previous studies have described the existence of a phonotactic bias called the Labial–Coronal (LC) bias, corresponding to a tendency to produce more words beginning with a labial consonant followed by a coronal consonant (i.e. “bat”) than the opposite CL pattern (i.e. “tap”). This bias has initially been interpreted in terms of articulatory constraints of the human speech production system. However, more recently, it has been suggested that this presumably language-general LC bias in production might be accompanied by LC and CL biases in perception, acquired in infancy on the basis of the properties of the linguistic input. The present study investigates the origins of these perceptual biases, testing infants learning Japanese, a language that has been claimed to possess more CL than LC sequences, and comparing them with infants learning French, a language showing a clear LC bias in its lexicon. First, a corpus analysis of Japanese IDS and ADS revealed the existence of an overall LC bias, except for plosive sequences in ADS, which show a CL bias across counts. Second, speech preference experiments showed a perceptual preference for CL over LC plosive sequences (all recorded by a Japanese speaker) in 13- but not in 7- and 10-month-old Japanese-learning infants (Experiment 1), while revealing the emergence of an LC preference between 7 and 10 months in French-learning infants, using the exact same stimuli. These crosslinguistic behavioral differences, obtained with the same stimuli, thus reflect differences in processing in two populations of infants, which can be linked to differences in the properties of the lexicons of their respective native languages. These findings establish that the emergence of a CL/LC bias is related to exposure to a linguistic input.
  • Goodhew, S. C., McGaw, B., & Kidd, E. (2014). Why is the sunny side always up? Explaining the spatial mapping of concepts by language use. Psychonomic Bulletin & Review, 21(5), 1287-1293. doi:10.3758/s13423-014-0593-6.

    Abstract

    Humans appear to rely on spatial mappings to represent and describe concepts. The conceptual cuing effect describes the tendency for participants to orient attention to a spatial location following the presentation of an unrelated cue word (e.g., orienting attention upward after reading the word sky). To date, such effects have predominantly been explained within the embodied cognition framework, according to which people’s attention is oriented on the basis of prior experience (e.g., sky → up via perceptual simulation). However, this does not provide a compelling explanation for how abstract words have the same ability to orient attention. Why, for example, does dream also orient attention upward? We report on an experiment that investigated the role of language use (specifically, collocation between concept words and spatial words for up and down dimensions) and found that it predicted the cuing effect. The results suggest that language usage patterns may be instrumental in explaining conceptual cuing.
  • Gori, M., Vercillo, T., Sandini, G., & Burr, D. (2014). Tactile feedback improves auditory spatial localization. Frontiers in Psychology, 5: 1121. doi:10.3389/fpsyg.2014.01121.

    Abstract

    Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback, or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three-sound sequence was spatially closer to the first or the third sound. The tactile feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no-feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially congruent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.
  • Goudbeek, M., Smits, R., Cutler, A., & Swingley, D. (2005). Acquiring auditory and phonetic categories. In H. Cohen, & C. Lefebvre (Eds.), Handbook of categorization in cognitive science (pp. 497-513). Amsterdam: Elsevier.
  • De Grauwe, S., Willems, R. M., Rüschemeyer, S.-A., Lemhöfer, K., & Schriefers, H. (2014). Embodied language in first- and second-language speakers: Neural correlates of processing motor verbs. Neuropsychologia, 56, 334-349. doi:10.1016/j.neuropsychologia.2014.02.003.

    Abstract

    The involvement of neural motor and sensory systems in the processing of language has so far mainly been studied in native (L1) speakers. In an fMRI experiment, we investigated whether non-native (L2) semantic representations are rich enough to allow for activation in motor and somatosensory brain areas. German learners of Dutch and a control group of Dutch native speakers made lexical decisions about visually presented Dutch motor and non-motor verbs. Region-of-interest (ROI) and whole-brain analyses indicated that L2 speakers, like L1 speakers, showed significantly increased activation for simple motor compared to non-motor verbs in motor and somatosensory regions. This effect was not restricted to Dutch-German cognate verbs, but was also present for non-cognate verbs. These results indicate that L2 semantic representations are rich enough for motor-related activations to develop in motor and somatosensory areas.
