Publications

Displaying 201 - 300 of 394
  • Levelt, W. J. M. (1982). Cognitive styles in the use of spatial direction terms. In R. Jarvella, & W. Klein (Eds.), Speech, place, and action: Studies in deixis and related topics (pp. 251-268). Chichester: Wiley.
  • Levelt, C. C., Fikkert, P., & Schiller, N. O. (2003). Metrical priming in speech production. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 2481-2485). Adelaide: Causal Productions.

    Abstract

    In this paper we report on four experiments in which we attempted to prime the stress position of Dutch bisyllabic target nouns. These nouns, picture names, had stress on either the first or the second syllable. Auditory prime words had either the same stress as the target or a different stress (e.g., WORtel – MOtor vs. koSTUUM – MOtor; capital letters indicate stressed syllables in prime – target pairs). Furthermore, half of the prime words were semantically related, the other half were unrelated. No stress priming effect was found in any of the experiments. This could mean that stress is not stored in the lexicon. An additional finding was that targets with initial stress were responded to faster than targets with final stress. We hypothesize that bisyllabic words with final stress take longer to be encoded because this stress pattern is irregular with respect to the lexical distribution of bisyllabic stress patterns, even though it can be regular in terms of the metrical stress rules of Dutch.
  • Levelt, W. J. M. (1962). Motion breaking and the perception of causality. In A. Michotte (Ed.), Causalité, permanence et réalité phénoménales: Etudes de psychologie expérimentale (pp. 244-258). Louvain: Publications Universitaires.
  • Levelt, W. J. M., & Plomp, R. (1962). Musical consonance and critical bandwidth. In Proceedings of the 4th International Congress on Acoustics (p. 55).
  • Levelt, W. J. M. (1982). Linearization in describing spatial networks. In S. Peters, & E. Saarinen (Eds.), Processes, beliefs, and questions (pp. 199-220). Dordrecht - Holland: D. Reidel.

    Abstract

    The topic of this paper is the way in which speakers order information in discourse. I will refer to this issue with the term "linearization", and will begin with two types of general remarks. The first concerns the scope and relevance of the problem with reference to some existing literature. The second set of general remarks will be about the place of linearization in a theory of the speaker. The following, and main, part of this paper will be a summary report of research on linearization in a limited but well-defined domain of discourse, namely the description of spatial networks.
  • Levelt, W. J. M. (2000). Introduction Section VII: Language. In M. S. Gazzaniga (Ed.), The new cognitive neurosciences (2nd ed., pp. 843-844). Cambridge: MIT Press.
  • Levelt, W. J. M. (1980). On-line processing constraints on the properties of signed and spoken language. In U. Bellugi, & M. Studdert-Kennedy (Eds.), Signed and spoken language: Biological constraints on linguistic form (pp. 141-160). Weinheim: Verlag Chemie.

    Abstract

    It is argued that the predominantly successive nature of language is largely mode-independent and holds equally for signed and spoken language. A preliminary distinction is made between what is simultaneous or successive in the signal and what is simultaneous or successive in the process; these need not coincide, and it is the successiveness of the process that is at stake. It is then argued, extensively for the word/sign level and in a more preliminary fashion for the clause and discourse levels, that online processes are parallel in that they can simultaneously draw on various sources of knowledge (syntactic, semantic, pragmatic), but successive in that they can work on the interpretation of only one unit at a time. This seems to hold for both signed and spoken language. In the final section, conjectures are made about possible evolutionary explanations for these properties of language processing.
  • Levelt, W. J. M. (2000). Psychology of language. In K. Pawlik, & M. R. Rosenzweig (Eds.), International handbook of psychology (pp. 151-167). London: SAGE publications.
  • Levelt, W. J. M. (2000). Speech production. In A. E. Kazdin (Ed.), Encyclopedia of psychology (pp. 432-433). Oxford University Press.
  • Levelt, W. J. M. (1974). Taalpsychologie: Van taalkunde naar psychologie [Psychology of language: From linguistics to psychology]. In Herstal-Conferentie.
  • Levelt, W. J. M., & Indefrey, P. (2000). The speaking mind/brain: Where do spoken words come from? In A. Marantz, Y. Miyashita, & W. O'Neil (Eds.), Image, language, brain: Papers from the First Mind Articulation Project Symposium (pp. 77-94). Cambridge, Mass.: MIT Press.
  • Levelt, W. J. M. (1980). Toegepaste aspecten van het taal-psychologisch onderzoek: Enkele inleidende overwegingen [Applied aspects of psycholinguistic research: Some introductory considerations]. In J. Matter (Ed.), Toegepaste aspekten van de taalpsychologie [Applied aspects of the psychology of language] (pp. 3-11). Amsterdam: VU Boekhandel.
  • Levinson, S. C. (2003). Spatial language. In L. Nadel (Ed.), Encyclopedia of cognitive science (pp. 131-137). London: Nature Publishing Group.
  • Levinson, S. C. (1982). Caste rank and verbal interaction in Western Tamilnadu. In D. B. McGilvray (Ed.), Caste ideology and interaction (pp. 98-203). Cambridge University Press.
  • Levinson, S. C. (2003). Contextualizing 'contextualization cues'. In S. Eerdmans, C. Prevignano, & P. Thibault (Eds.), Language and interaction: Discussions with John J. Gumperz (pp. 31-39). Amsterdam: John Benjamins.
  • Levinson, S. C. (2003). Language and cognition. In W. Frawley (Ed.), International Encyclopedia of Linguistics (pp. 459-463). Oxford: Oxford University Press.
  • Levinson, S. C. (2003). Language and mind: Let's get the issues straight! In D. Gentner, & S. Goldin-Meadow (Eds.), Language in mind: Advances in the study of language and cognition (pp. 25-46). Cambridge, MA: MIT Press.
  • Levinson, S. C., & Toni, I. (2019). Key issues and future directions: Interactional foundations of language. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 257-261). Cambridge, MA: MIT Press.
  • Levinson, S. C. (2000). Language as nature and language as art. In J. Mittelstrass, & W. Singer (Eds.), Proceedings of the Symposium on ‘Changing concepts of nature and the turn of the Millennium’ (pp. 257-287). Vatican City: Pontificae Academiae Scientiarium Scripta Varia.
  • Levinson, S. C. (2000). H.P. Grice on location on Rossel Island. In S. S. Chang, L. Liaw, & J. Ruppenhofer (Eds.), Proceedings of the 25th Annual Meeting of the Berkeley Linguistics Society (pp. 210-224). Berkeley: Berkeley Linguistics Society.
  • Levinson, S. C. (2019). Interactional foundations of language: The interaction engine hypothesis. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 189-200). Cambridge, MA: MIT Press.
  • Levinson, S. C. (2018). Introduction: Demonstratives: Patterns in diversity. In S. C. Levinson, S. Cutfield, M. Dunn, N. J. Enfield, & S. Meira (Eds.), Demonstratives in cross-linguistic perspective (pp. 1-42). Cambridge: Cambridge University Press.
  • Levinson, S. C. (2019). Natural forms of purposeful interaction among humans: What makes interaction effective? In K. A. Gluck, & J. E. Laird (Eds.), Interactive task learning: Humans, robots, and agents acquiring new tasks through natural interactions (pp. 111-126). Cambridge, MA: MIT Press.
  • Levinson, S. C. (1982). Speech act theory: The state of the art. In V. Kinsella (Ed.), Surveys 2. Eight state-of-the-art articles on key areas in language teaching. Cambridge University Press.
  • Levinson, S. C. (2018). Yélî Dnye: Demonstratives in the language of Rossel Island, Papua New Guinea. In S. C. Levinson, S. Cutfield, M. Dunn, N. J. Enfield, & S. Meira (Eds.), Demonstratives in cross-linguistic perspective (pp. 318-342). Cambridge: Cambridge University Press.
  • Liszkowski, U., & Epps, P. (2003). Directing attention and pointing in infants: A cross-cultural approach. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 25-27). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877649.

    Abstract

    Recent research suggests that 12-month-old infants in German cultural settings have the motive of sharing their attention to and interest in various events with a social interlocutor. To do so, these preverbal infants predominantly use the pointing gesture (in this case the extended arm with or without extended index finger) as a means to direct another person’s attention. This task systematically investigates different types of motives underlying infants’ pointing. The occurrence of a protodeclarative (as opposed to protoimperative) motive is of particular interest because it requires an understanding of the recipient’s psychological states, such as attention and interest, that can be directed and accessed.
  • Liu, S., & Zhang, Y. (2019). Why some verbs are harder to learn than others – A micro-level analysis of everyday learning contexts for early verb learning. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 2173-2178). Montreal, QC: Cognitive Science Society.

    Abstract

    Verb learning is important for young children. While most previous research has focused on linguistic and conceptual challenges in early verb learning (e.g. Gentner, 1982, 2006), the present paper examined early verb learning at the attentional level and quantified the input for early verb learning by measuring verb-action co-occurrence statistics in parent-child interaction from the learner’s perspective. To do so, we used head-mounted eye tracking to record fine-grained multimodal behaviors during parent-infant joint play, and analyzed parent speech, parent and infant action, and infant attention at the moments when parents produced verb labels. Our results show great variability across different action verbs in terms of frequency of verb utterances, frequency of corresponding actions related to verb meanings, and infants’ attention to verbs and actions, which provides new insights into why some verbs are harder to learn than others.
  • Lopopolo, A., Frank, S. L., Van den Bosch, A., Nijhof, A., & Willems, R. M. (2018). The Narrative Brain Dataset (NBD), an fMRI dataset for the study of natural language processing in the brain. In B. Devereux, E. Shutova, & C.-R. Huang (Eds.), Proceedings of LREC 2018 Workshop "Linguistic and Neuro-Cognitive Resources (LiNCR)" (pp. 8-11). Paris: LREC.

    Abstract

    We present the Narrative Brain Dataset, an fMRI dataset that was collected during spoken presentation of short excerpts of three stories in Dutch. Together with the brain imaging data, the dataset contains the written versions of the stimulus texts. The texts are accompanied by stochastic (perplexity and entropy) and semantic computational linguistic measures. The richness and unconstrained nature of the data allows the study of language processing in the brain in a more naturalistic setting than is common for fMRI studies. We hope that by making NBD available we serve the double purpose of providing useful neural data to researchers interested in natural language processing in the brain and of further stimulating data sharing in the field of the neuroscience of language.
  • Lupyan, G., Wendorf, A., Berscia, L. M., & Paul, J. (2018). Core knowledge or language-augmented cognition? The case of geometric reasoning. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 252-254). Toruń, Poland: NCU Press. doi:10.12775/3991-1.062.
  • Mai, F., Galke, L., & Scherp, A. (2019). CBOW is not all you need: Combining CBOW with the compositional matrix space model. In Proceedings of the Seventh International Conference on Learning Representations (ICLR 2019). OpenReview.net.

    Abstract

    Continuous Bag of Words (CBOW) is a powerful text embedding method. Due to its strong capabilities to encode word content, CBOW embeddings perform well on a wide range of downstream tasks while being efficient to compute. However, CBOW is not capable of capturing the word order. The reason is that the computation of CBOW's word embeddings is commutative, i.e., embeddings of XYZ and ZYX are the same. In order to address this shortcoming, we propose a learning algorithm for the Continuous Matrix Space Model, which we call Continual Multiplication of Words (CMOW). Our algorithm is an adaptation of word2vec, so that it can be trained on large quantities of unlabeled text. We empirically show that CMOW better captures linguistic properties, but it is inferior to CBOW in memorizing word content. Motivated by these findings, we propose a hybrid model that combines the strengths of CBOW and CMOW. Our results show that the hybrid CBOW-CMOW model retains CBOW's strong ability to memorize word content while at the same time substantially improving its ability to encode other linguistic information by 8%. As a result, the hybrid also performs better on 8 out of 11 supervised downstream tasks with an average improvement of 1.2%.
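
    Code sketch

    The order-sensitivity point is easy to see in a few lines. This is a toy sketch (not the authors' code; all sizes and values are invented): bag-of-words addition is commutative, so "XYZ" and "ZYX" get identical embeddings, whereas composing per-word matrices by multiplication, as in CMOW-style models, preserves order.

    import numpy as np

    rng = np.random.default_rng(0)
    vec = {w: rng.normal(size=4) for w in "XYZ"}        # CBOW-style word vectors
    mat = {w: rng.normal(size=(4, 4)) for w in "XYZ"}   # CMOW-style word matrices

    def cbow(seq):
        # Composition by addition: commutative, hence order-blind.
        return np.sum([vec[w] for w in seq], axis=0)

    def cmow(seq):
        # Composition by matrix product: non-commutative, hence order-aware.
        out = np.eye(4)
        for w in seq:
            out = out @ mat[w]
        return out

    print(np.allclose(cbow("XYZ"), cbow("ZYX")))   # True: same embedding
    print(np.allclose(cmow("XYZ"), cmow("ZYX")))   # False: order matters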
  • Mai, F., Galke, L., & Scherp, A. (2018). Using deep learning for title-based semantic subject indexing to reach competitive performance to full-text. In J. Chen, M. A. Gonçalves, J. M. Allen, E. A. Fox, M.-Y. Kan, & V. Petras (Eds.), JCDL '18: Proceedings of the 18th ACM/IEEE on Joint Conference on Digital Libraries (pp. 169-178). New York: ACM.

    Abstract

    For (semi-)automated subject indexing systems in digital libraries, it is often more practical to use metadata such as the title of a publication instead of the full-text or the abstract. Therefore, it is desirable to have good text mining and text classification algorithms that operate well already on the title of a publication. So far, the classification performance on titles is not competitive with the performance on the full-texts if the same number of training samples is used for training. However, it is much easier to obtain title data in large quantities and to use it for training than full-text data. In this paper, we investigate the question how models obtained from training on increasing amounts of title training data compare to models from training on a constant number of full-texts. We evaluate this question on a large-scale dataset from the medical domain (PubMed) and from economics (EconBiz). In these datasets, the titles and annotations of millions of publications are available, and they outnumber the available full-texts by a factor of 20 and 15, respectively. To exploit these large amounts of data to their full potential, we develop three strong deep learning classifiers and evaluate their performance on the two datasets. The results are promising. On the EconBiz dataset, all three classifiers outperform their full-text counterparts by a large margin. The best title-based classifier outperforms the best full-text method by 9.4%. On the PubMed dataset, the best title-based method almost reaches the performance of the best full-text classifier, with a difference of only 2.9%.
  • Majid, A. (2018). Cultural factors shape olfactory language [Reprint]. In D. Howes (Ed.), Senses and Sensation: Critical and Primary Sources. Volume 3 (pp. 307-310). London: Bloomsbury Publishing.
  • Majid, A., & Bödeker, K. (2003). Folk theories of objects in motion. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 72-76). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877654.

    Abstract

    There are three main strands of research which have investigated people’s intuitive knowledge of objects in motion. (1) Knowledge of the trajectories of objects in motion; (2) knowledge of the causes of motion; and (3) the categorisation of motion as to whether it has been produced by something animate or inanimate. We provide a brief introduction to each of these areas. We then point to some linguistic and cultural differences which may have consequences for people’s knowledge of objects in motion. Finally, we describe two experimental tasks and an ethnographic task that will allow us to collect data in order to establish whether, indeed, there are interesting cross-linguistic/cross-cultural differences in lay theories of objects in motion.
  • Majid, A. (2018). Language and cognition. In H. Callan (Ed.), The International Encyclopedia of Anthropology. Hoboken: John Wiley & Sons Ltd.

    Abstract

    What is the relationship between the language we speak and the way we think? Researchers working at the interface of language and cognition hope to understand the complex interplay between linguistic structures and the way the mind works. This is thorny territory in anthropology and its closely allied disciplines, such as linguistics and psychology.

  • Majid, A. (2019). Preface. In L. J. Speed, C. O'Meara, L. San Roque, & A. Majid (Eds.), Perception Metaphors (pp. vii-viii). Amsterdam: Benjamins.
  • Mamus, E., Rissman, L., Majid, A., & Ozyurek, A. (2019). Effects of blindfolding on verbal and gestural expression of path in auditory motion events. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 2275-2281). Montreal, QC: Cognitive Science Society.

    Abstract

    Studies have claimed that blind people’s spatial representations differ from those of sighted people, and that blind people display superior auditory processing. Due to the nature of auditory and haptic information, it has been proposed that blind people's spatial representations are more sequential than those of sighted people. Even the temporary loss of sight (such as through blindfolding) can affect spatial representations, but not much research has been done on this topic. We compared blindfolded and sighted people’s linguistic spatial expressions and non-linguistic localization accuracy to test how blindfolding affects the representation of path in auditory motion events. We found that blindfolded people were as good as sighted people when localizing simple sounds, but they outperformed sighted people when localizing auditory motion events. Blindfolded people’s path-related speech also included more sequential and fewer holistic elements. Our results indicate that even temporary loss of sight influences spatial representations of auditory motion events.
  • Mamus, E., & Karadöller, D. Z. (2018). Anıları Zihinde Canlandırma [Imagery in autobiographical memories]. In S. Gülgöz, B. Ece, & S. Öner (Eds.), Hayatı Hatırlamak: Otobiyografik Belleğe Bilimsel Yaklaşımlar [Remembering Life: Scientific Approaches to Autobiographical Memory] (pp. 185-200). Istanbul, Turkey: Koç University Press.
  • Mani, N., Mishra, R. K., & Huettig, F. (2018). Introduction to 'The Interactive Mind: Language, Vision and Attention'. In N. Mani, R. K. Mishra, & F. Huettig (Eds.), The Interactive Mind: Language, Vision and Attention (pp. 1-2). Chennai: Macmillan Publishers India.
  • Marcoux, K., & Ernestus, M. (2019). Differences between native and non-native Lombard speech in terms of pitch range. In M. Ochmann, M. Vorländer, & J. Fels (Eds.), Proceedings of the ICA 2019 and EAA Euroregio. 23rd International Congress on Acoustics, integrating 4th EAA Euroregio 2019 (pp. 5713-5720). Berlin: Deutsche Gesellschaft für Akustik.

    Abstract

    Lombard speech, speech produced in noise, is acoustically different from speech produced in quiet (plain speech) in several ways, including having a higher and wider F0 range (pitch). Extensive research on native Lombard speech does not consider that non-natives experience a higher cognitive load while producing speech and that the native language may influence non-native speech. We investigated pitch range in plain and Lombard speech in natives and non-natives. Dutch and American-English speakers read contrastive question-answer pairs in quiet and in noise in English, while the Dutch also read Dutch sentence pairs. We found that Lombard speech is characterized by a wider pitch range than plain speech for all speakers (native English, non-native English, and native Dutch). This shows that non-natives also widen their pitch range in Lombard speech. In sentences with early focus, we see the same increase in pitch range when going from plain to Lombard speech in native and non-native English, but a smaller increase in native Dutch. In sentences with late focus, we see the biggest increase for the native English speakers, followed by non-native English and then native Dutch. Together these results indicate an effect of the native language on non-native Lombard speech.
  • Marcoux, K., & Ernestus, M. (2019). Pitch in native and non-native Lombard speech. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 2605-2609). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    Lombard speech, speech produced in noise, is typically produced with a higher fundamental frequency (F0, pitch) compared to speech in quiet. This paper examined the potential differences in native and non-native Lombard speech by analyzing median pitch in sentences with early or late focus produced in quiet and noise. We found an increase in pitch in late-focus sentences in noise for Dutch speakers in both English and Dutch, and for American-English speakers in English. These results show that non-native speakers produce Lombard speech, despite their higher cognitive load. For the early-focus sentences, we found a difference between the Dutch and the American-English speakers. Whereas the Dutch showed an increased F0 in noise in English and Dutch, the American-English speakers did not in English. Together, these results suggest that some acoustic characteristics of Lombard speech, such as pitch, may be language-specific, potentially resulting in the native language influencing non-native Lombard speech.
  • McQueen, J. M., & Cho, T. (2003). The use of domain-initial strengthening in segmentation of continuous English speech. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 2993-2996). Adelaide: Causal Productions.
  • McQueen, J. M., Dahan, D., & Cutler, A. (2003). Continuity and gradedness in speech processing. In N. O. Schiller, & A. S. Meyer (Eds.), Phonetics and phonology in language comprehension and production: Differences and similarities (pp. 39-78). Berlin: Mouton de Gruyter.
  • McQueen, J. M., & Meyer, A. S. (2019). Key issues and future directions: Towards a comprehensive cognitive architecture for language use. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 85-96). Cambridge, MA: MIT Press.
  • McQueen, J. M., Cutler, A., & Norris, D. (2000). Positive and negative influences of the lexicon on phonemic decision-making. In B. Yuan, T. Huang, & X. Tang (Eds.), Proceedings of the Sixth International Conference on Spoken Language Processing: Vol. 3 (pp. 778-781). Beijing: China Military Friendship Publish.

    Abstract

    Lexical knowledge influences how human listeners make decisions about speech sounds. Positive lexical effects (faster responses to target sounds in words than in nonwords) are robust across several laboratory tasks, while negative effects (slower responses to targets in more word-like nonwords than in less word-like nonwords) have been found in phonetic decision tasks but not phoneme monitoring tasks. The present experiments tested whether negative lexical effects are therefore a task-specific consequence of the forced choice required in phonetic decision. We compared phoneme monitoring and phonetic decision performance using the same Dutch materials in each task. In both experiments there were positive lexical effects, but no negative lexical effects. We observe that in all studies showing negative lexical effects, the materials were made by cross-splicing, which meant that they contained perceptual evidence supporting the lexically-consistent phonemes. Lexical knowledge seems to influence phonemic decision-making only when there is evidence for the lexically-consistent phoneme in the speech signal.
  • McQueen, J. M., Cutler, A., & Norris, D. (2000). Why Merge really is autonomous and parsimonious. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP (Workshop on Spoken Word Access Processes) (pp. 47-50). Nijmegen: Max Planck Institute for Psycholinguistics.

    Abstract

    We briefly describe the Merge model of phonemic decision-making, and, in the light of general arguments about the possible role of feedback in spoken-word recognition, defend Merge's feedforward structure. Merge not only accounts adequately for the data, without invoking feedback connections, but does so in a parsimonious manner.
  • Meeuwissen, M., Roelofs, A., & Levelt, W. J. M. (2003). Naming analog clocks conceptually facilitates naming digital clocks. In Proceedings of XIII Conference of the European Society of Cognitive Psychology (ESCOP 2003) (p. 271).
  • Meira, S. (2003). 'Addressee effects' in demonstrative systems: The cases of Tiriyó and Brazilian Portugese. In F. Lenz (Ed.), Deictic conceptualization of space, time and person (pp. 3-12). Amsterdam/Philadelphia: John Benjamins.
  • Merkx, D., Frank, S., & Ernestus, M. (2019). Language learning using speech to image retrieval. In Proceedings of Interspeech 2019 (pp. 1841-1845). doi:10.21437/Interspeech.2019-3067.

    Abstract

    Humans learn language by interaction with their environment and listening to other humans. It should also be possible for computational models to learn language directly from speech, but so far most approaches require text. We improve on existing neural network approaches to create visually grounded embeddings for spoken utterances. Using a combination of a multi-layer GRU, importance sampling, cyclic learning rates, ensembling, and vectorial self-attention, our results show a remarkable increase in image-caption retrieval performance over previous work. Furthermore, we investigate which layers in the model learn to recognise words in the input. We find that deeper network layers are better at encoding word presence, although the final layer has slightly lower performance. This shows that our visually grounded sentence encoder learns to recognise words from the input even though it is not explicitly trained for word recognition.
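
    Code sketch

    A minimal sketch of the kind of visually grounded retrieval setup the abstract describes, assuming PyTorch; the layer sizes, loss margin, and names are illustrative assumptions and none of this reproduces the authors' model. A GRU encodes acoustic features into a caption embedding, and a hinge ranking loss over in-batch negatives pulls matching image and caption embeddings together in a shared space.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CaptionEncoder(nn.Module):
        def __init__(self, n_feats=40, hidden=256, embed=512):
            super().__init__()
            self.gru = nn.GRU(n_feats, hidden, num_layers=2, batch_first=True)
            self.proj = nn.Linear(hidden, embed)

        def forward(self, x):                 # x: (batch, frames, n_feats)
            out, _ = self.gru(x)
            return F.normalize(self.proj(out[:, -1]), dim=-1)

    enc = CaptionEncoder()
    speech = torch.randn(8, 100, 40)                    # dummy acoustic features
    images = F.normalize(torch.randn(8, 512), dim=-1)   # dummy image embeddings
    caps = enc(speech)

    scores = caps @ images.T                  # cosine similarity matrix
    pos = scores.diag().unsqueeze(1)          # matched pairs on the diagonal
    off_diag = ~torch.eye(8, dtype=torch.bool)
    loss = (0.2 + scores - pos).clamp(min=0)[off_diag].mean()
    print(loss.item())                        # train by minimizing this loss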
  • Merkx, D., & Scharenborg, O. (2018). Articulatory feature classification using convolutional neural networks. In Proceedings of Interspeech 2018 (pp. 2142-2146). doi:10.21437/Interspeech.2018-2275.

    Abstract

    The ultimate goal of our research is to improve an existing speech-based computational model of human speech recognition on the task of simulating the role of fine-grained phonetic information in human speech processing. As part of this work we are investigating articulatory feature classifiers that are able to create reliable and accurate transcriptions of the articulatory behaviour encoded in the acoustic speech signal. Articulatory feature (AF) modelling of speech has received a considerable amount of attention in automatic speech recognition research. Different approaches have been used to build AF classifiers, most notably multi-layer perceptrons. Recently, deep neural networks have been applied to the task of AF classification. This paper aims to improve AF classification by investigating two different approaches: 1) investigating the usefulness of a deep convolutional neural network (CNN) for AF classification; 2) integrating the Mel filtering operation into the CNN architecture. The results showed a remarkable improvement in classification accuracy of the CNNs over state-of-the-art AF classification results for Dutch, most notably in the minority classes. Integrating the Mel filtering operation into the CNN architecture did not further improve classification performance.
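
    Code sketch

    A schematic PyTorch sketch of a CNN classifier over mel-spectrogram windows, the general shape of the approach described above; the number of articulatory-feature classes, the window size, and the layers are invented for illustration and do not reproduce the paper's architecture.

    import torch
    import torch.nn as nn

    class AFClassifier(nn.Module):
        def __init__(self, n_classes=5):          # e.g. 5 manner classes (assumed)
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Linear(32 * 10 * 5, n_classes)

        def forward(self, x):                     # x: (batch, 1, 40 mels, 20 frames)
            return self.head(self.conv(x).flatten(1))

    model = AFClassifier()
    windows = torch.randn(4, 1, 40, 20)           # dummy mel-spectrogram windows
    print(model(windows).shape)                   # (4, 5): per-class scores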
  • Meyer, A. S., & Dobel, C. (2003). Application of eye tracking in speech production research. In J. Hyönä, R. Radach, & H. Deubel (Eds.), The mind’s eye: Cognitive and applied aspects of eye movement research (pp. 253-272). Amsterdam: Elsevier.
  • Micklos, A., Macuch Silva, V., & Fay, N. (2018). The prevalence of repair in studies of language evolution. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 316-318). Toruń, Poland: NCU Press. doi:10.12775/3991-1.075.
  • Mitterer, H., Brouwer, S., & Huettig, F. (2018). How important is prediction for understanding spontaneous speech? In N. Mani, R. K. Mishra, & F. Huettig (Eds.), The Interactive Mind: Language, Vision and Attention (pp. 26-40). Chennai: Macmillan Publishers India.
  • Moisik, S. R., Zhi Yun, D. P., & Dediu, D. (2019). Active adjustment of the cervical spine during pitch production compensates for shape: The ArtiVarK study. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 864-868). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    The anterior lordosis of the cervical spine is thought to contribute to pitch (fo) production by influencing cricoid rotation as a function of larynx height. This study examines the matter of inter-individual variation in cervical spine shape and whether this has an influence on how fo is produced along increasing or decreasing scales, using the ArtiVarK dataset, which contains real-time MRI pitch production data. We find that the cervical spine actively participates in fo production, but the amount of displacement depends on individual shape. In general, anterior spine motion (tending toward cervical lordosis) occurs for low fo, while posterior movement (tending towards cervical kyphosis) occurs for high fo.
  • Moscoso del Prado Martín, F., & Baayen, R. H. (2003). Using the structure found in time: Building real-scale orthographic and phonetic representations by accumulation of expectations. In H. Bowman, & C. Labiouse (Eds.), Connectionist Models of Cognition, Perception and Emotion: Proceedings of the Eighth Neural Computation and Psychology Workshop (pp. 263-272). Singapore: World Scientific.
  • Mulder, K., Ten Bosch, L., & Boves, L. (2018). Analyzing EEG Signals in Auditory Speech Comprehension Using Temporal Response Functions and Generalized Additive Models. In Proceedings of Interspeech 2018 (pp. 1452-1456). doi:10.21437/Interspeech.2018-1676.

    Abstract

    Analyzing EEG signals recorded while participants are listening to continuous speech with the purpose of testing linguistic hypotheses is complicated by the fact that the signals simultaneously reflect exogenous acoustic excitation and endogenous linguistic processing. This makes it difficult to trace subtle differences that occur in mid-sentence position. We apply an analysis based on multivariate temporal response functions to uncover subtle mid-sentence effects. This approach is based on a per-stimulus estimate of the response of the neural system to speech input. Analyzing EEG signals predicted on the basis of the response functions might then bring to light condition-specific differences in the filtered signals. We validate this approach by means of an analysis of EEG signals recorded with isolated word stimuli. Then, we apply the validated method to the analysis of the responses to the same words in the middle of meaningful sentences.
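
    Code sketch

    The core of a temporal response function analysis can be illustrated with ridge regression on a lagged design matrix. This toy single-channel example uses synthetic data and invented parameters (it is not the authors' pipeline): it recovers a known response kernel from a noisy convolved signal.

    import numpy as np

    rng = np.random.default_rng(1)
    stim = rng.normal(size=2000)                   # e.g. a speech envelope
    true_trf = np.exp(-np.arange(30) / 10.0)       # toy neural response kernel
    eeg = np.convolve(stim, true_trf)[:2000] + rng.normal(scale=0.5, size=2000)

    lags = np.arange(30)                           # lags 0..29 samples
    X = np.stack([np.roll(stim, l) for l in lags], axis=1)
    X[:29] = 0                                     # drop wrap-around samples

    lam = 10.0                                     # ridge regularization strength
    trf = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)
    print(np.corrcoef(trf, true_trf)[0, 1])        # close to 1: kernel recovered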
  • Naffah, N., Kempen, G., Rohmer, J., Steels, L., Tsichritzis, D., & White, G. (1985). Intelligent Workstation in the office: State of the art and future perspectives. In J. Roukens, & J. Renuart (Eds.), Esprit '84: Status report of ongoing work (pp. 365-378). Amsterdam: Elsevier Science Publishers.
  • Neijt, A., Schreuder, R., & Baayen, R. H. (2003). Verpleegsters, ambassadrices, and masseuses: Stratum differences in the comprehension of Dutch words with feminine agent suffixes. In L. Cornips, & P. Fikkert (Eds.), Linguistics in the Netherlands 2003. (pp. 117-127). Amsterdam: Benjamins.
  • Nijveld, A., Ten Bosch, L., & Ernestus, M. (2019). ERP signal analysis with temporal resolution using a time window bank. In Proceedings of Interspeech 2019 (pp. 1208-1212). doi:10.21437/Interspeech.2019-2729.

    Abstract

    In order to study the cognitive processes underlying speech comprehension, neuro-physiological measures (e.g., EEG and MEG), or behavioural measures (e.g., reaction times and response accuracy) can be applied. Compared to behavioural measures, EEG signals can provide a more fine-grained and complementary view of the processes that take place during the unfolding of an auditory stimulus.

    EEG signals are often analysed after having chosen specific time windows, which are usually based on the temporal structure of ERP components expected to be sensitive to the experimental manipulation. However, as the timing of ERP components may vary between experiments, trials, and participants, such a priori defined analysis time windows may significantly hamper the exploratory power of the analysis of components of interest. In this paper, we explore a wide-window analysis method applied to EEG signals collected in an auditory repetition priming experiment.

    This approach is based on a bank of temporal filters arranged along the time axis in combination with linear mixed effects modelling. Crucially, it permits a temporal decomposition of effects in a single comprehensive statistical model which captures the entire EEG trace.
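
    Code sketch

    A toy rendering of the window-bank idea (the window length, step, and sampling rate here are assumptions, not the paper's settings): summarize an EEG trace within overlapping windows arranged along the time axis, yielding one value per window that can then enter a mixed-effects model.

    import numpy as np

    fs = 250                                     # sampling rate in Hz (assumed)
    t = np.arange(0, 1.0, 1 / fs)                # one second after stimulus onset
    erp = np.sin(8 * t) * np.exp(-3 * t)         # dummy single-trial EEG trace

    win_len, step = 0.100, 0.050                 # 100 ms windows, 50 ms steps
    starts = np.arange(0.0, 1.0 - win_len + 1e-9, step)
    bank = np.array([erp[(t >= s) & (t < s + win_len)].mean() for s in starts])
    print(len(bank), bank[:3])                   # one predictor value per window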
  • Norcliffe, E. (2018). Egophoricity and evidentiality in Guambiano (Nam Trik). In S. Floyd, E. Norcliffe, & L. San Roque (Eds.), Egophoricity (pp. 305-345). Amsterdam: Benjamins.

    Abstract

    Egophoric verbal marking is a typological feature common to Barbacoan languages, but otherwise unknown in the Andean sphere. The verbal systems of three out of the four living Barbacoan languages, Cha’palaa, Tsafiki and Awa Pit, have previously been shown to express egophoric contrasts. The status of Guambiano has, however, remained uncertain. In this chapter, I show that there are in fact two layers of egophoric or egophoric-like marking visible in Guambiano’s grammar. Guambiano patterns with certain other (non-Barbacoan) languages in having ego-categories which function within a broader evidential system. It is additionally possible to detect what is possibly a more archaic layer of egophoric marking in Guambiano’s verbal system. This marking may be inherited from a common Barbacoan system, thus pointing to a potential genealogical basis for the egophoric patterning common to these languages. The multiple formal expressions of egophoricity apparent both within and across the four languages reveal how egophoric contrasts are susceptible to structural renewal, suggesting a pan-Barbacoan preoccupation with the linguistic encoding of self-knowledge.
  • Norris, D., Cutler, A., McQueen, J. M., Butterfield, S., & Kearns, R. K. (2000). Language-universal constraints on the segmentation of English. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP (Workshop on Spoken Word Access Processes) (pp. 43-46). Nijmegen: Max Planck Institute for Psycholinguistics.

    Abstract

    Two word-spotting experiments are reported that examine whether the Possible-Word Constraint (PWC) [1] is a language-specific or language-universal strategy for the segmentation of continuous speech. The PWC disfavours parses which leave an impossible residue between the end of a candidate word and a known boundary. The experiments examined cases where the residue was either a CV syllable with a lax vowel, or a CVC syllable with a schwa. Although neither syllable context is a possible word in English, word-spotting in both contexts was easier than with a context consisting of a single consonant. The PWC appears to be language-universal rather than language-specific.
  • Norris, D., Cutler, A., & McQueen, J. M. (2000). The optimal architecture for simulating spoken-word recognition. In C. Davis, T. Van Gelder, & R. Wales (Eds.), Cognitive Science in Australia, 2000: Proceedings of the Fifth Biennial Conference of the Australasian Cognitive Science Society. Adelaide: Causal Productions.

    Abstract

    Simulations explored the inability of the TRACE model of spoken-word recognition to model the effects on human listening of subcategorical mismatch in word forms. The source of TRACE's failure lay not in interactive connectivity, not in the presence of inter-word competition, and not in the use of phonemic representations, but in the need for continuously optimised interpretation of the input. When an analogue of TRACE was allowed to cycle to asymptote on every slice of input, an acceptable simulation of the subcategorical mismatch data was achieved. Even then, however, the simulation was not as close as that produced by the Merge model, which has inter-word competition, phonemic representations and continuous optimisation (but no interactive connectivity).
  • O'Meara, C., Speed, L. J., San Roque, L., & Majid, A. (2019). Perception Metaphors: A view from diversity. In L. J. Speed, C. O'Meara, L. San Roque, & A. Majid (Eds.), Perception Metaphors (pp. 1-16). Amsterdam: Benjamins.

    Abstract

    Our bodily experiences play an important role in the way that we think and speak. Abstract language is, however, difficult to reconcile with this body-centred view, unless we appreciate the role metaphors play. To explore the role of the senses across semantic domains, we focus on perception metaphors, and examine their realisation across diverse languages, methods, and approaches. To what extent do mappings in perception metaphor adhere to predictions based on our biological propensities; and to what extent is there space for cross-linguistic and cross-cultural variation? We find that while some metaphors have widespread commonality, there is more diversity attested than should be comfortable for universalist accounts.
  • Oostdijk, N., & Broeder, D. (2003). The Spoken Dutch Corpus and its exploitation environment. In A. Abeille, S. Hansen-Schirra, & H. Uszkoreit (Eds.), Proceedings of the 4th International Workshop on Linguistically Interpreted Corpora (LINC-03) (pp. 93-101).
  • Otake, T., & Cutler, A. (2000). A set of Japanese word cohorts rated for relative familiarity. In B. Yuan, T. Huang, & X. Tang (Eds.), Proceedings of the Sixth International Conference on Spoken Language Processing: Vol. 3 (pp. 766-769). Beijing: China Military Friendship Publish.

    Abstract

    A database is presented of relative familiarity ratings for 24 sets of Japanese words, each set comprising words overlapping in the initial portions. These ratings are useful for the generation of material sets for research in the recognition of spoken words.
  • Otake, T., & Cutler, A. (2003). Evidence against "units of perception". In S. Shohov (Ed.), Advances in psychology research (pp. 57-82). Hauppauge, NY: Nova Science.
  • Ouni, S., Cohen, M. M., Young, K., & Jesse, A. (2003). Internationalization of a talking head. In M. Sole, D. Recasens, & J. Romero (Eds.), Proceedings of the 15th International Congress of Phonetic Sciences (pp. 2569-2572). Barcelona: Causal Productions.

    Abstract

    In this paper we describe a general scheme for internationalization of our talking head, Baldi, to speak other languages. We describe the modular structure of the auditory/visual synthesis software. As an example, we have created a synthetic Arabic talker, which is evaluated using a noisy word recognition task comparing this talker with a natural one.
  • Ozyurek, A. (2018). Cross-linguistic variation in children’s multimodal utterances. In M. Hickmann, E. Veneziano, & H. Jisa (Eds.), Sources of variation in first language acquisition: Languages, contexts, and learners (pp. 123-138). Amsterdam: Benjamins.

    Abstract

    Our ability to use language is multimodal and requires tight coordination between what is expressed in speech and in gesture, such as pointing or iconic gestures that convey semantic, syntactic and pragmatic information related to speakers’ messages. Interestingly, what is expressed in gesture and how it is coordinated with speech differs in speakers of different languages. This paper discusses recent findings on the development of children’s multimodal expressions taking cross-linguistic variation into account. Although some aspects of speech-gesture development show language-specificity from an early age, it might still take children until nine years of age to exhibit fully adult patterns of cross-linguistic variation. These findings reveal insights about how children coordinate different levels of representations given that their development is constrained by patterns that are specific to their languages.
  • Ozyurek, A. (2000). Differences in spatial conceptualization in Turkish and English discourse: Evidence from both speech and gesture. In A. Goksel, & C. Kerslake (Eds.), Studies on Turkish and Turkic languages (pp. 263-272). Wiesbaden: Harrassowitz.
  • Ozyurek, A., & Woll, B. (2019). Language in the visual modality: Cospeech gesture and sign language. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 67-83). Cambridge, MA: MIT Press.
  • Ozyurek, A., & Ozcaliskan, S. (2000). How do children learn to conflate manner and path in their speech and gestures? Differences in English and Turkish. In E. V. Clark (Ed.), The proceedings of the Thirtieth Child Language Research Forum (pp. 77-85). Stanford: CSLI Publications.
  • Ozyurek, A. (2018). Role of gesture in language processing: Toward a unified account for production and comprehension. In S.-A. Rueschemeyer, & M. G. Gaskell (Eds.), Oxford Handbook of Psycholinguistics (2nd ed., pp. 592-607). Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780198786825.013.25.

    Abstract

    Use of language in face-to-face context is multimodal. Production and perception of speech take place in the context of visual articulators such as lips, face, or hand gestures which convey relevant information to what is expressed in speech at different levels of language. While lips convey information at the phonological level, gestures contribute to semantic, pragmatic, and syntactic information, as well as to discourse cohesion. This chapter overviews recent findings showing that speech and gesture (e.g. a drinking gesture as someone says, “Would you like a drink?”) interact during production and comprehension of language at the behavioral, cognitive, and neural levels. Implications of these findings for current psycholinguistic theories and how they can be expanded to consider the multimodal context of language processing are discussed.
  • Ozyurek, A. (2000). The influence of addressee location on spatial language and representational gestures of direction. In D. McNeill (Ed.), Language and gesture (pp. 64-83). Cambridge: Cambridge University Press.
  • Parhammer*, S. I., Ebersberg*, M., Tippmann*, J., Stärk*, K., Opitz, A., Hinger, B., & Rossi, S. (2019). The influence of distraction on speech processing: How selective is selective attention? In Proceedings of Interspeech 2019 (pp. 3093-3097). doi:10.21437/Interspeech.2019-2699.

    Abstract

    (* indicates shared first authorship)
    The present study investigated the effects of selective attention on the processing of morphosyntactic errors in unattended parts of speech. Two groups of German native (L1) speakers participated in the present study. Participants listened to sentences in which irregular verbs were manipulated in three different conditions (correct, incorrect but attested ablaut pattern, incorrect and crosslinguistically unattested ablaut pattern). In order to track fast dynamic neural reactions to the stimuli, electroencephalography was used. After each sentence, participants in Experiment 1 performed a semantic judgement task, which deliberately distracted the participants from the syntactic manipulations and directed their attention to the semantic content of the sentence. In Experiment 2, participants carried out a syntactic judgement task, which put their attention on the critical stimuli. The use of two different attentional tasks allowed for investigating the impact of selective attention on speech processing and whether morphosyntactic processing steps are performed automatically. In Experiment 2, the incorrect attested condition elicited a larger N400 component compared to the correct condition, whereas in Experiment 1 no differences between conditions were found. These results suggest that the processing of morphosyntactic violations in irregular verbs is not entirely automatic but seems to be strongly affected by selective attention.
  • Pawley, A., & Hammarström, H. (2018). The Trans New Guinea family. In B. Palmer (Ed.), Papuan Languages and Linguistics (pp. 21-196). Berlin: De Gruyter Mouton.
  • Piai, V., & Zheng, X. (2019). Speaking waves: Neuronal oscillations in language production. In K. D. Federmeier (Ed.), Psychology of Learning and Motivation (pp. 265-302). Elsevier.

    Abstract

    Language production involves the retrieval of information from memory, the planning of an articulatory program, and executive control and self-monitoring. These processes can be related to the domains of long-term memory, motor control, and executive control. Here, we argue that studying neuronal oscillations provides an important opportunity to understand how general neuronal computational principles support language production, also helping elucidate relationships between language and other domains of cognition. For each relevant domain, we provide a brief review of the findings in the literature with respect to neuronal oscillations. Then, we show how similar patterns are found in the domain of language production, both through review of previous literature and novel findings. We conclude that neurophysiological mechanisms, as reflected in modulations of neuronal oscillations, may act as a fundamental basis for bringing together and enriching the fields of language and cognition.
  • Piepers, J., & Redl, T. (2018). Gender-mismatching pronouns in context: The interpretation of possessive pronouns in Dutch and Limburgian. In B. Le Bruyn, & J. Berns (Eds.), Linguistics in the Netherlands 2018 (pp. 97-110). Amsterdam: Benjamins.

    Abstract

    Gender-(mis)matching pronouns have been studied extensively in experiments. However, a phenomenon common to various languages has thus far been overlooked: the systemic use of non-feminine pronouns when referring to female individuals. The present study is the first to provide experimental insights into the interpretation of such a pronoun: Limburgian zien ‘his/its’ and Dutch zijn ‘his/its’ are grammatically ambiguous between masculine and neuter, but while Limburgian zien can refer to women, the Dutch equivalent zijn cannot. Employing an acceptability judgment task, we presented speakers of Limburgian (N = 51) with recordings of sentences in Limburgian featuring zien, and speakers of Dutch (N = 52) with Dutch translations of these sentences featuring zijn. All sentences featured a potential male or female antecedent embedded in a stereotypically male or female context. We found that ratings were higher for sentences in which the pronoun could refer back to the antecedent. For Limburgians, this extended to sentences mentioning female individuals. Context further modulated sentence appreciation. Possible mechanisms regarding the interpretation of zien as coreferential with a female individual will be discussed.
  • Pouw, W., Paxton, A., Harrison, S. J., & Dixon, J. A. (2019). Acoustic specification of upper limb movement in voicing. In A. Grimminger (Ed.), Proceedings of the 6th Gesture and Speech in Interaction – GESPIN 6 (pp. 68-74). Paderborn: Universitaetsbibliothek Paderborn. doi:10.17619/UNIPB/1-812.
  • Pouw, W., & Dixon, J. A. (2019). Quantifying gesture-speech synchrony. In A. Grimminger (Ed.), Proceedings of the 6th Gesture and Speech in Interaction – GESPIN 6 (pp. 75-80). Paderborn: Universitaetsbibliothek Paderborn. doi:10.17619/UNIPB/1-812.

    Abstract

    Spontaneously occurring speech is often seamlessly accompanied by hand gestures. Detailed observations of video data suggest that speech and gesture are tightly synchronized in time, consistent with a dynamic interplay between body and mind. However, spontaneous gesture-speech synchrony has rarely been objectively quantified beyond analyses of video data, which do not allow for identification of kinematic properties of gestures. Consequently, the point in gesture which is held to couple with speech, the so-called moment of “maximum effort”, has been variably equated with the peak velocity, peak acceleration, peak deceleration, or the onset of the gesture. In the current exploratory report, we provide novel evidence from motion-tracking and acoustic data that peak velocity is closely aligned with, and shortly leads, the peak pitch (F0) of speech.

    Additional information

    https://osf.io/9843h/
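
    Code sketch

    A toy numeric illustration of the quantification described above (the signals are synthetic and the analysis is far simpler than the paper's): derive a speed profile from a position trace, then compare the time of peak velocity with the time of peak F0.

    import numpy as np

    fs = 100                                           # Hz, assumed common rate
    t = np.arange(0, 2, 1 / fs)
    wrist = np.exp(-((t - 0.8) ** 2) / 0.02)           # dummy vertical wrist position
    f0 = 120 + 40 * np.exp(-((t - 0.9) ** 2) / 0.05)   # dummy pitch track (Hz)

    speed = np.abs(np.gradient(wrist, 1 / fs))         # velocity magnitude
    t_vel = t[np.argmax(speed)]                        # moment of peak velocity
    t_f0 = t[np.argmax(f0)]                            # moment of peak pitch
    print(f"peak velocity leads peak F0 by {t_f0 - t_vel:.2f} s")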
  • Räsänen, O., Seshadri, S., & Casillas, M. (2018). Comparison of syllabification algorithms and training strategies for robust word count estimation across different languages and recording conditions. In Proceedings of Interspeech 2018 (pp. 1200-1204). doi:10.21437/Interspeech.2018-1047.

    Abstract

    Word count estimation (WCE) from audio recordings has a number of applications, including quantifying the amount of speech that language-learning infants hear in their natural environments, as captured by daylong recordings made with devices worn by infants. To be applicable in a wide range of scenarios and also low-resource domains, WCE tools should be extremely robust against varying signal conditions and require minimal access to labeled training data in the target domain. For this purpose, earlier work has used automatic syllabification of speech, followed by a least-squares mapping of syllables to word counts. This paper compares a number of previously proposed syllabifiers in the WCE task, including a supervised bi-directional long short-term memory (BLSTM) network that is trained on a language for which high quality syllable annotations are available (a “high resource language”), and reports how the alternative methods compare on different languages and signal conditions. We also explore additive noise and varying-channel data augmentation strategies for BLSTM training, and show how they improve performance in both matching and mismatching languages. Intriguingly, we also find that even though the BLSTM works on languages beyond its training data, the unsupervised algorithms can still outperform it in challenging signal conditions on novel languages.
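
    Code sketch

    The least-squares mapping from syllable counts to word counts reduces to a linear fit. A toy version with invented numbers follows (the paper's models and data are much richer than this):

    import numpy as np

    syll = np.array([12, 30, 7, 55, 21, 40], dtype=float)   # detected syllables
    words = np.array([8, 19, 5, 36, 14, 27], dtype=float)   # reference word counts

    A = np.stack([syll, np.ones_like(syll)], axis=1)        # slope + intercept
    coef, *_ = np.linalg.lstsq(A, words, rcond=None)
    est = A @ coef                                          # predicted word counts
    print(coef, np.abs(est - words).mean())                 # fit and mean abs. error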
  • Ravignani, A., Chiandetti, C., & Kotz, S. (2019). Rhythm and music in animal signals. In J. Choe (Ed.), Encyclopedia of Animal Behavior (vol. 1) (2nd ed., pp. 615-622). Amsterdam: Elsevier.
  • Ravignani, A., Garcia, M., Gross, S., de Reus, K., Hoeksema, N., Rubio-Garcia, A., & de Boer, B. (2018). Pinnipeds have something to say about speech and rhythm. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 399-401). Toruń, Poland: NCU Press. doi:10.12775/3991-1.095.
  • Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). The role of community size in the emergence of linguistic structure. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 402-404). Toruń, Poland: NCU Press. doi:10.12775/3991-1.096.
  • Rissman, L., & Majid, A. (2019). Agency drives category structure in instrumental events. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 2661-2667). Montreal, QC: Cognitive Science Society.

    Abstract

    Thematic roles such as Agent and Instrument have a long-standing place in theories of event representation. Nonetheless, the structure of these categories has been difficult to determine. We investigated how instrumental events, such as someone slicing bread with a knife, are categorized in English. Speakers described a variety of typical and atypical instrumental events, and we determined the similarity structure of their descriptions using correspondence analysis. We found that events where the instrument is an extension of an intentional agent were most likely to elicit similar language, highlighting the importance of agency in structuring instrumental categories.
  • Roelofs, A. (2003). Modeling the relation between the production and recognition of spoken word forms. In N. O. Schiller, & A. S. Meyer (Eds.), Phonetics and phonology in language comprehension and production: Differences and similarities (pp. 115-158). Berlin: Mouton de Gruyter.
  • Rojas-Berscia, L. M. (2019). Nominalization in Shawi/Chayahuita. In R. Zariquiey, M. Shibatani, & D. W. Fleck (Eds.), Nominalization in languages of the Americas (pp. 491-514). Amsterdam: Benjamins.

    Abstract

    This paper deals with the Shawi nominalizing suffixes -su’~-ru’~-nu’ ‘general nominalizer’, -napi/-te’/-tun ‘performer/agent nominalizer’, -pi’ ‘patient nominalizer’, and -nan ‘instrument nominalizer’. The goal of this article is to provide a description of nominalization in Shawi. Throughout this paper I apply the Generalized Scale Model (GSM) (Malchukov, 2006) to Shawi verbal nominalizations, with the intention of presenting a formal representation that will provide a basis for future areal and typological studies of nominalization. In addition, I dialogue with Shibatani’s model to see how the loss or gain of categories correlates with the lexical or grammatical nature of nominalizations. Strong nominalization in Shawi correlates with lexical nominalization, whereas weak nominalization correlates with grammatical nominalization. A typology which takes into account the productivity of the nominalizers is also discussed.
  • Rommers, J., & Federmeier, K. D. (2018). Electrophysiological methods. In A. M. B. De Groot, & P. Hagoort (Eds.), Research methods in psycholinguistics and the neurobiology of language: A practical guide (pp. 247-265). Hoboken: Wiley.
  • Rowland, C. F., & Kidd, E. (2019). Key issues and future directions: How do children acquire language? In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 181-185). Cambridge, MA: MIT Press.
  • Rubio-Fernández, P., & Jara-Ettinger, J. (2018). Joint inferences of speakers’ beliefs and referents based on how they speak. In C. Kalish, M. Rau, J. Zhu, & T. T. Rogers (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society (CogSci 2018) (pp. 991-996). Austin, TX: Cognitive Science Society.

    Abstract

    For almost two decades, the poor performance observed with the so-called Director task has been interpreted as evidence of limited use of Theory of Mind in communication. Here we propose a probabilistic model of common ground in referential communication that derives three inferences from an utterance: what the speaker is talking about in a visual context, what she knows about the context, and what referential expressions she prefers. We tested our model by comparing its inferences with those made by human participants and found that it closely mirrors their judgments, whereas an alternative model compromising the hearer’s expectations of cooperativeness and efficiency reveals a worse fit to the human data. Rather than assuming that common ground is fixed in a given exchange and may or may not constrain reference resolution, we show how common ground can be inferred as part of the process of reference assignment.
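
    Code sketch

    A toy Bayesian joint inference in the spirit of the model described above (all hypotheses, priors, and likelihoods are invented for illustration): given an utterance, jointly infer the referent and the speaker's knowledge of the context, then read off a marginal belief.

    # Hypotheses pair a referent with a speaker-knowledge state.
    prior = {("cup", "full"): 0.25, ("cup", "partial"): 0.25,
             ("mug", "full"): 0.25, ("mug", "partial"): 0.25}
    # P(utterance = "the cup" | referent, knowledge), invented values.
    lik = {("cup", "full"): 0.9, ("cup", "partial"): 0.7,
           ("mug", "full"): 0.05, ("mug", "partial"): 0.2}

    post = {h: prior[h] * lik[h] for h in prior}
    Z = sum(post.values())
    post = {h: p / Z for h, p in post.items()}

    p_cup = post[("cup", "full")] + post[("cup", "partial")]
    print(f"P(referent = cup | utterance) = {p_cup:.2f}")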
  • Rubio-Fernández, P., Breheny, R., & Lee, M. W. (2003). Context-independent information in concepts: An investigation of the notion of ‘core features’. In Proceedings of the 25th Annual Conference of the Cognitive Science Society (CogSci 2003). Austin, TX: Cognitive Science Society.
  • Rubio-Fernández, P. (2019). Theory of mind. In C. Cummins, & N. Katsos (Eds.), The handbook of experimental semantics and pragmatics (pp. 524-536). Oxford: Oxford University Press.
  • De Ruiter, J. P. (2003). The function of hand gesture in spoken conversation. In M. Bickenbach, A. Klappert, & H. Pompe (Eds.), Manus Loquens: Medium der Geste, Gesten der Medien (pp. 338-347). Cologne: DuMont.
  • De Ruiter, J. P. (2003). A quantitative model of Störung. In A. Kümmel, & E. Schüttpelz (Eds.), Signale der Störung (pp. 67-81). München: Wilhelm Fink Verlag.
  • Saleh, A., Beck, T., Galke, L., & Scherp, A. (2018). Performance comparison of ad-hoc retrieval models over full-text vs. titles of documents. In M. Dobreva, A. Hinze, & M. Žumer (Eds.), Maturity and Innovation in Digital Libraries: 20th International Conference on Asia-Pacific Digital Libraries, ICADL 2018, Hamilton, New Zealand, November 19-22, 2018, Proceedings (pp. 290-303). Cham, Switzerland: Springer.

    Abstract

    While there are many studies on information retrieval models using full-text, there are presently no comparative studies of full-text retrieval vs. retrieval over the titles of documents alone. On the one hand, the full-text of documents such as scientific papers is not always available due to, e.g., the copyright policies of academic publishers. On the other hand, searching on titles alone has strong limitations: titles are short and may not contain enough information to yield satisfactory search results. In this paper, we compare different retrieval models regarding their search performance on the full-text vs. only the titles of documents. We use several datasets, including three digital library datasets: EconBiz, IREON, and PubMed. The results show that it is possible to build effective title-based retrieval models whose results are competitive with full-text retrieval: the average evaluation scores of the best title-based retrieval models are only 3% lower than those of the best full-text-based retrieval models.
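    As a minimal sketch of such a comparison, the snippet below runs one standard ranker (BM25, via the third-party rank_bm25 package, not the paper's toolkit) over a title index and a full-text index of the same toy corpus; the documents and query are invented.

```python
# Sketch: the same BM25 ranker over titles vs. full text (toy corpus).
from rank_bm25 import BM25Okapi

docs = [
    {"title": "neural ranking models for retrieval",
     "body": "neural ranking models for retrieval we study deep architectures ..."},
    {"title": "economic indicators survey",
     "body": "economic indicators survey gdp inflation unemployment trends ..."},
]

def rank(field, query):
    corpus = [d[field].split() for d in docs]      # tokenized index over one field
    scores = BM25Okapi(corpus).get_scores(query.split())
    return sorted(range(len(docs)), key=lambda i: -scores[i])

query = "neural ranking"
print("titles   :", rank("title", query))   # short, sparse signal
print("full text:", rank("body", query))    # richer, but not always available
```

    Evaluating both rankings against the same relevance judgments quantifies how much is lost when only titles are indexed.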
  • San Roque, L. (2018). Egophoric patterns in Duna verbal morphology. In S. Floyd, E. Norcliffe, & L. San Roque (Eds.), Egophoricity (pp. 405-436). Amsterdam: Benjamins.

    Abstract

    In the language Duna (Trans New Guinea), egophoric distributional patterns are a pervasive characteristic of verbal morphology, but do not comprise a single coherent system. Many morphemes, including evidential markers and future time inflections, show strong tendencies to co-occur with ‘informant’ subjects (the speaker in a declarative, the addressee in an interrogative), or alternatively with non-informant subjects. The person sensitivity of the Duna forms is observable in frequency, speaker judgments of sayability, and subject implicatures. Egophoric and non-egophoric distributional patterns are motivated by the individual semantics of the morphemes, their perspective-taking properties, and logical and/or conventionalised expectations of how people experience and talk about events. Distributional tendencies can also be flouted, providing a resource for speakers to convey attitudes towards their own knowledge and experiences, or the knowledge and experiences of others.
  • San Roque, L., Floyd, S., & Norcliffe, E. (2018). Egophoricity: An introduction. In S. Floyd, E. Norcliffe, & L. San Roque (Eds.), Egophoricity (pp. 1-78). Amsterdam: Benjamins.
  • San Roque, L., & Schieffelin, B. B. (2018). Learning how to know. In S. Floyd, E. Norcliffe, & L. San Roque (Eds.), Egophoricity (pp. 437-471). Amsterdam: Benjamins. doi:10.1075/tsl.118.14san.

    Abstract

    Languages with egophoric systems require their users to pay special attention to who knows what in the speech situation, providing formal marking of whether the speaker or addressee has personal knowledge of the event being discussed. Such systems have only recently come to be studied in cross-linguistic perspective. This chapter has two aims in regard to contributing to our understanding of egophoric marking. Firstly, it presents relevant data from a relatively under-described and endangered language, Kaluli (aka Bosavi), spoken in Papua New Guinea. Unusually, Kaluli tense inflections appear to show a mix of both egophoric and first vs non-first person-marking features, as well as other contrasts that are broadly relevant to a typology of egophoricity, such as special constructions for the expression of involuntary experience. Secondly, the chapter makes a preliminary foray into issues concerning egophoric marking and child language, drawing on a naturalistic corpus of child-caregiver interactions. Questions for future investigation raised by the Kaluli data concern, for example, the potentially challenging nature of mastering inflections that are sensitive to both person and speech act type, the possible role of question-answer pairs in children’s acquisition of egophoric morphology, and whether there are special features of epistemic access and authority that relate particularly to child-adult interactions.
  • Sandberg, A., Lansner, A., Petersson, K. M., & Ekeberg, Ö. (2000). A palimpsest memory based on an incremental Bayesian learning rule. In J. M. Bower (Ed.), Computational Neuroscience: Trends in Research 2000 (pp. 987-994). Amsterdam: Elsevier.
  • Scharenborg, O., & Merkx, D. (2018). The role of articulatory feature representation quality in a computational model of human spoken-word recognition. In Proceedings of the Machine Learning in Speech and Language Processing Workshop (MLSLP 2018).

    Abstract

    Fine-Tracker is a speech-based model of human speech recognition. While previous work has shown that Fine-Tracker is successful at modelling aspects of human spoken-word recognition, its speech recognition performance is not comparable to human performance, possibly due to suboptimal intermediate articulatory feature (AF) representations. This study investigates the effect of improved AF representations, obtained using a state-of-the-art deep convolutional network, on Fine-Tracker's simulation and recognition performance. Although the improved AF quality resulted in improved speech recognition, it surprisingly did not lead to an improvement in Fine-Tracker's simulation power.
  • Scharenborg, O., Bouwman, G., & Boves, L. (2000). Connected digit recognition with class specific word models. In Proceedings of the COST249 Workshop on Voice Operated Telecom Services workshop (pp. 71-74).

    Abstract

    This work focuses on efficient use of the training material by selecting an optimal set of model topologies. We do this by training multiple word models for each word class, based on a subclassification according to a priori knowledge of the training material. We examine classification criteria with respect to the duration of the word, the gender of the speaker, the position of the word in the utterance, pauses in the vicinity of the word, and combinations of these. Comparative experiments were carried out on a corpus of Dutch spoken connected digit strings and isolated digits, recorded in a wide variety of acoustic conditions. The results show that classifications based on the gender of the speaker, the position of the digit in the string, and pauses in the vicinity of the training tokens, as well as models based on a combination of these criteria, perform significantly better than the set with a single model per digit.
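    The subclassification step can be sketched as follows; the token fields and the `fit` callback are invented placeholders, not the paper's recogniser.

```python
# Sketch of class-specific word modelling: partition training tokens by
# a-priori criteria and train one model per subclass (placeholder code).
from collections import defaultdict

def subclass_key(token):
    # e.g. ('3', 'female', 'string-final'): one model per digit subclass
    return (token["word"], token["gender"], token["position"])

def train_models(tokens, fit):
    groups = defaultdict(list)
    for t in tokens:
        groups[subclass_key(t)].append(t["features"])
    # At recognition time, all models for a digit compete in parallel.
    return {key: fit(feats) for key, feats in groups.items()}
```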
  • Scharenborg, O., McQueen, J. M., Ten Bosch, L., & Norris, D. (2003). Modelling human speech recognition using automatic speech recognition paradigms in SpeM. In Proceedings of Eurospeech 2003 (pp. 2097-2100). Adelaide: Causal Productions.

    Abstract

    We have recently developed a new model of human speech recognition, based on automatic speech recognition techniques [1]. The present paper has two goals. First, we show that the new model performs well in the recognition of lexically ambiguous input. These demonstrations suggest that the model is able to operate in the same optimal way as human listeners. Second, we discuss how to relate the behaviour of a recogniser, designed to discover the optimum path through a word lattice, to data from human listening experiments. We argue that this requires a metric that combines both path-based and word-based measures of recognition performance. The combined metric varies continuously as the input speech signal unfolds over time.
