Barlas, P., Kyriakou, K., Guest, O., Kleanthous, S., & Otterbacher, J. (2021). To "see" is to stereotype: Image tagging algorithms, gender recognition, and the accuracy-fairness trade-off. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW3): 32. doi:10.1145/3432931.
Abstract
Machine-learned computer vision algorithms for tagging images are increasingly used by developers and researchers, having become popularized as easy-to-use "cognitive services." Yet these tools struggle with gender recognition, particularly when processing images of women, people of color and non-binary individuals. Socio-technical researchers have cited data bias as a key problem; training datasets often over-represent images of people and contexts that convey social stereotypes. The social psychology literature explains that people learn social stereotypes, in part, by observing others in particular roles and contexts, and can inadvertently learn to associate gender with scenes, occupations and activities. Thus, we study the extent to which image tagging algorithms mimic this phenomenon. We design a controlled experiment to examine the interdependence between algorithmic recognition of context and the depicted person's gender. In the spirit of auditing to understand machine behaviors, we create a highly controlled dataset of people images, imposed on gender-stereotyped backgrounds. Our methodology is reproducible and our code publicly available. Evaluating five proprietary algorithms, we find that in three, gender inference is hindered when a background is introduced. Of the two that "see" both backgrounds and gender, it is the one whose output is most consistent with human stereotyping processes that is superior in recognizing gender. We discuss the accuracy-fairness trade-off, as well as the importance of auditing black boxes in better understanding this double-edged sword.
Birhane, A., & Guest, O. (2021). Towards decolonising computational sciences. Kvinder, Køn & Forskning, 29(2), 60-73. doi:10.7146/kkf.v29i2.124899.
Abstract
This article sets out our perspective on how to begin the journey of decolonising computational fields, such as data and cognitive sciences. We see this struggle as requiring two basic steps: a) realisation that the present-day system has inherited, and still enacts, hostile, conservative, and oppressive behaviours and principles towards women of colour; and b) rejection of the idea that centring individual people is a solution to system-level problems. The longer we ignore these two steps, the more “our” academic system maintains its toxic structure, excludes, and harms Black women and other minoritised groups. This also keeps the door open to discredited pseudoscience, like eugenics and physiognomy. We propose that grappling with our fields’ histories and heritage holds the key to avoiding mistakes of the past. This contrasts with, for example, initiatives such as “diversity boards”, which can be harmful because they superficially appear reformatory but nonetheless centre whiteness and maintain the status quo. Building on the work of many women of colour, we hope to advance the dialogue required to build both a grass-roots and a top-down re-imagining of computational sciences — including but not limited to psychology, neuroscience, cognitive science, computer science, data science, statistics, machine learning, and artificial intelligence. We aspire to progress away from these fields’ stagnant, sexist, and racist shared past into an ecosystem that welcomes and nurtures demographically diverse researchers and ideas that critically challenge the status quo.
Guest, O., & Martin, A. E. (2021). How computational modeling can force theory building in psychological science. Perspectives on Psychological Science, 16(4), 789-802. doi:10.1177/1745691620970585.
Abstract
Psychology endeavors to develop theories of human capacities and behaviors on the basis of a variety of methodologies and dependent measures. We argue that one of the most divisive factors in psychological science is whether researchers choose to use computational modeling of theories (over and above data) during the scientific-inference process. Modeling is undervalued yet holds promise for advancing psychological science. The inherent demands of computational modeling guide us toward better science by forcing us to conceptually analyze, specify, and formalize intuitions that otherwise remain unexamined—what we dub open theory. Constraining our inference process through modeling enables us to build explanatory and predictive theories. Here, we present scientific inference in psychology as a path function in which each step shapes the next. Computational modeling can constrain these steps, thus advancing scientific inference over and above the stewardship of experimental practice (e.g., preregistration). If psychology continues to eschew computational modeling, we predict more replicability crises and persistent failure at coherent theory building. This is because without formal modeling we lack open and transparent theorizing. We also explain how to formalize, specify, and implement a computational model, emphasizing that the advantages of modeling can be achieved by anyone with benefit to all.
Bobadilla-Suarez, S., Guest, O., & Love, B. C. (2020). Subjective value and decision entropy are jointly encoded by aligned gradients across the human brain. Communications Biology, 3: 597. doi:10.1038/s42003-020-01315-3.
Abstract
Recent work has considered the relationship between value and confidence in both behavioural and neural representation. Here we evaluated whether the brain organises value and confidence signals in a systematic fashion that reflects the overall desirability of options. If so, regions that respond to either increases or decreases in both value and confidence should be widespread. We strongly confirmed these predictions through a model-based fMRI analysis of a mixed gambles task that assessed subjective value (SV) and inverse decision entropy (iDE), which is related to confidence. Purported value areas more strongly signalled iDE than SV, underscoring how intertwined value and confidence are. A gradient tied to the desirability of actions transitioned from positive SV and iDE in ventromedial prefrontal cortex to negative SV and iDE in dorsal medial prefrontal cortex. This alignment of SV and iDE signals could support retrospective evaluation to guide learning and subsequent decisions.
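The inverse decision entropy (iDE) measure above can be made concrete with a minimal sketch. Assuming, as a plain reading of the term rather than the paper's exact formulation, that iDE is the negated Shannon entropy of the binary accept/reject choice distribution, confident decisions score high and coin-flip decisions score low:

```python
import math

def decision_entropy(p_accept):
    """Shannon entropy (in bits) of a binary accept/reject decision."""
    probs = [p_accept, 1.0 - p_accept]
    return -sum(p * math.log2(p) for p in probs if p > 0.0)

def inverse_decision_entropy(p_accept):
    """iDE sketch: negated entropy, so confident choices score higher."""
    return -decision_entropy(p_accept)

# A coin-flip decision (p = 0.5) is maximally uncertain (1 bit),
# so its iDE is minimal; near-certain choices approach 0 bits.
```

Under this reading, iDE tracks confidence: a gamble accepted with probability 0.5 yields the minimum iDE of -1 bit, while near-certain acceptances or rejections approach 0.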
Botvinik-Nezer, R., Holzmeister, F., Camerer, C. F., Dreber, A., Huber, J., Johannesson, M., Kirchler, M., Iwanir, R., Mumford, J. A., Adcock, R. A., Avesani, P., Baczkowski, B., Bajracharya, A., Bakst, L., Ball, S., Barilari, M., Bault, N., Beaton, D., Beitner, J., Benoit, R. G., Berkers, R., Bhanji, J. P., Biswal, B. B., Bobadilla-Suarez, S., Bortolini, T., Bottenhorn, K. L., Bowring, A., Braem, S., Brooks, H. R., Brudner, E. G., Calderon, C. B., Camilleri, J. A., Castrellon, J. J., Cecchetti, L., Cieslik, E. C., Cole, Z. J., Collignon, O., Cox, R. W., Cunningham, W. A., Czoschke, S., Dadi, K., Davis, C. P., De Luca, A., Delgado, M. R., Demetriou, L., Dennison, J. B., Di, X., Dickie, E. W., Dobryakova, E., Donnat, C. L., Dukart, J., Duncan, N. W., Durnez, J., Eed, A., Eickhoff, S. B., Erhart, A., Fontanesi, L., Fricke, G. M., Fu, S., Galván, A., Gau, R., Genon, S., Glatard, T., Glerean, E., Goeman, J. J., Golowin, S. A. E., González-García, C., Gorgolewski, K. J., Grady, C. L., Green, M. A., Guassi Moreira, J. F., Guest, O., Hakimi, S., Hamilton, J. P., Hancock, R., Handjaras, G., Harry, B. B., Hawco, C., Herholz, P., Herman, G., Heunis, S., Hoffstaedter, F., Hogeveen, J., Holmes, S., Hu, C.-P., Huettel, S. A., Hughes, M. E., Iacovella, V., Iordan, A. D., Isager, P. M., Isik, A. I., Jahn, A., Johnson, M. R., Johnstone, T., Joseph, M. J. E., Juliano, A. C., Kable, J. W., Kassinopoulos, M., Koba, C., Kong, X., Koscik, T. R., Kucukboyaci, N. E., Kuhl, B. A., Kupek, S., Laird, A. R., Lamm, C., Langner, R., Lauharatanahirun, N., Lee, H., Lee, S., Leemans, A., Leo, A., Lesage, E., Li, F., Li, M. Y. C., Lim, P. C., Lintz, E. N., Liphardt, S. W., Losecaat Vermeer, A. B., Love, B. 
C., Mack, M. L., Malpica, N., Marins, T., Maumet, C., McDonald, K., McGuire, J. T., Melero, H., Méndez Leal, A. S., Meyer, B., Meyer, K. N., Mihai, P. G., Mitsis, G. D., Moll, J., Nielson, D. M., Nilsonne, G., Notter, M. P., Olivetti, E., Onicas, A. I., Papale, P., Patil, K. R., Peelle, J. E., Pérez, A., Pischedda, D., Poline, J.-B., Prystauka, Y., Ray, S., Reuter-Lorenz, P. A., Reynolds, R. C., Ricciardi, E., Rieck, J. R., Rodriguez-Thompson, A. M., Romyn, A., Salo, T., Samanez-Larkin, G. R., Sanz-Morales, E., Schlichting, M. L., Schultz, D. H., Shen, Q., Sheridan, M. A., Silvers, J. A., Skagerlund, K., Smith, A., Smith, D. V., Sokol-Hessner, P., Steinkamp, S. R., Tashjian, S. M., Thirion, B., Thorp, J. N., Tinghög, G., Tisdall, L., Tompson, S. H., Toro-Serey, C., Torre Tresols, J. J., Tozzi, L., Truong, V., Turella, L., van 't Veer, A. E., Verguts, T., Vettel, J. M., Vijayarajah, S., Vo, K., Wall, M. B., Weeda, W. D., Weis, S., White, D. J., Wisniewski, D., Xifra-Porxas, A., Yearling, E. A., Yoon, S., Yuan, R., Yuen, K. S. L., Zhang, L., Zhang, X., Zosky, J. E., Nichols, T. E., Poldrack, R. A., & Schonberg, T. (2020). Variability in the analysis of a single neuroimaging dataset by many teams. Nature, 582, 84-88. doi:10.1038/s41586-020-2314-9.
Abstract
Data analysis workflows in many scientific domains have become increasingly complex and flexible. Here we assess the effect of this flexibility on the results of functional magnetic resonance imaging by asking 70 independent teams to analyse the same dataset, testing the same 9 ex-ante hypotheses [1]. The flexibility of analytical approaches is exemplified by the fact that no two teams chose identical workflows to analyse the data. This flexibility resulted in sizeable variation in the results of hypothesis tests, even for teams whose statistical maps were highly correlated at intermediate stages of the analysis pipeline. Variation in reported results was related to several aspects of analysis methodology. Notably, a meta-analytical approach that aggregated information across teams yielded a significant consensus in activated regions. Furthermore, prediction markets of researchers in the field revealed an overestimation of the likelihood of significant findings, even by researchers with direct knowledge of the dataset [2-5]. Our findings show that analytical flexibility can have substantial effects on scientific conclusions, and identify factors that may be related to variability in the analysis of functional magnetic resonance imaging. The results emphasize the importance of validating and sharing complex analysis workflows, and demonstrate the need for performing and reporting multiple analyses of the same data. Potential approaches that could be used to mitigate issues related to analytical variability are discussed.
Guest, O., Caso, A., & Cooper, R. P. (2020). On simulating neural damage in connectionist networks. Computational Brain & Behavior, 3, 289-321. doi:10.1007/s42113-020-00081-z.
Abstract
A key strength of connectionist modelling is its ability to simulate both intact cognition and the behavioural effects of neural damage. We survey the literature, showing that models have been damaged in a variety of ways, e.g. by removing connections, by adding noise to connection weights, by scaling weights, by removing units and by adding noise to unit activations. While these different implementations of damage have often been assumed to be behaviourally equivalent, some theorists have made aetiological claims that rest on nonequivalence. They suggest that related deficits with different aetiologies might be accounted for by different forms of damage within a single model. We present two case studies that explore the effects of different forms of damage in two influential connectionist models, each of which has been applied to explain neuropsychological deficits. Our results indicate that the effect of simulated damage can indeed be sensitive to the way in which damage is implemented, particularly when the environment comprises subsets of items that differ in their statistical properties, but such effects are sensitive to relatively subtle aspects of the model’s training environment. We argue that, as a consequence, substantial methodological care is required if aetiological claims about simulated neural damage are to be justified, and conclude more generally that implementation assumptions, including those concerning simulated damage, must be fully explored when evaluating models of neurological deficits, both to avoid over-extending the explanatory power of specific implementations and to ensure that reported results are replicable.
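The forms of damage surveyed in the abstract can be sketched as operations on a connection-weight matrix. This is an illustrative sketch of the general techniques, not the paper's own code; the function names, NumPy representation, and damage parameters are our own assumptions:

```python
import numpy as np

def remove_connections(W, prop, rng):
    """Zero a random proportion of individual connection weights."""
    mask = rng.random(W.shape) >= prop
    return W * mask

def add_weight_noise(W, sd, rng):
    """Perturb every weight with zero-mean Gaussian noise."""
    return W + rng.normal(0.0, sd, W.shape)

def scale_weights(W, factor):
    """Uniformly attenuate all weights."""
    return W * factor

def remove_units(W, prop, rng):
    """Lesion whole units by zeroing their outgoing weight rows."""
    damaged = W.copy()
    n_units = W.shape[0]
    lesioned = rng.choice(n_units, size=int(prop * n_units), replace=False)
    damaged[lesioned, :] = 0.0
    return damaged

# Apply two different forms of damage to the same intact weights.
rng = np.random.default_rng(0)
W = rng.normal(0.0, 1.0, (10, 10))
sparse_damage = remove_connections(W, 0.3, rng)   # topology lesioned
noisy_damage = add_weight_noise(W, 0.1, rng)      # topology intact
```

For example, `remove_units` with `prop=0.5` lesions half the units outright, while `add_weight_noise` leaves the connectivity intact but corrupts every weight slightly; the paper's point is that such implementations need not produce equivalent behavioural deficits.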
Whitaker, K., & Guest, O. (2020). #bropenscience is broken science: Kirstie Whitaker and Olivia Guest ask how open ‘open science’ really is. The Psychologist, 33, 34-37.