Between continuities and ruptures: the representation of scientists and science based on images generated by ChatGPT

DOI:

https://doi.org/10.11606/issn.2316-9125.v29i1p127-146

Keywords:

Artificial intelligence, Representation, Stereotypes, Scientist, Science

Abstract

Stereotypical images of scientists and science may affect public trust and influence movement toward or away from the scientific field. Given the increasing use of artificial intelligence, we analyze images generated by ChatGPT for the prompts “scientist” and “science” to identify continuities and ruptures in the maintenance of racial, gender, and age stereotypes, as well as of stereotypes that frame the exact and biological sciences as the reference model. Overall, the results suggest the reaffirmation of the scientist as a white man and of the exact and biological sciences as central in society. They also indicate changes in the depiction of scientific practice, shifting from the individual to the collective and conveying an optimistic view of the inclusion of young scientists and of easy access to hypertechnological resources.


Author Biographies

  • Luiz Felipe Fernandes Neves, Universidade Federal de Goiás

    PhD in Sciences from the Fundação Oswaldo Cruz (IOC/Fiocruz). Researcher at the Instituto Nacional de Comunicação Pública da Ciência e Tecnologia (INCT-CPCT). Journalist at the Universidade Federal de Goiás (UFG).

  • Amanda Medeiros, Instituto Nacional de Comunicação Pública da Ciência e Tecnologia

    Postdoctoral researcher at the Instituto Nacional de Comunicação Pública da Ciência e Tecnologia (INCT-CPCT). Fellow of the Junior Postdoctoral Program (PDJ) of Faperj.

  • Luisa Massarani, Instituto Nacional de Comunicação Pública da Ciência e Tecnologia

    Coordinator of the Instituto Nacional de Comunicação Pública da Ciência e Tecnologia (INCT-CPCT); researcher at the Casa de Oswaldo Cruz, Fundação Oswaldo Cruz. CNPq Research Productivity Fellow (level 1B); Faperj Cientista do Nosso Estado fellow.

References

AMARASEKARA, Inoka; GRANT, Will J. Exploring the YouTube science communication gender gap: A sentiment analysis. Public Understanding of Science, Thousand Oaks, v. 28, n. 1, p. 68-84, 2019. DOI: https://doi.org/10.1177/0963662518786654

ASH, Elliott et al. Visual representation and stereotypes in news media. SSRN, New York, p. 1-26, 2021.

BENDER, Emily et al. On the dangers of stochastic parrots: can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, [s. l.], p. 610-623, 2021. DOI: https://doi.org/10.1145/3442188.3445922

BETKER, James et al. Improving image generation with better captions. OpenAI, [s. l.], p. 1-19, 2023. Disponível em: https://cdn.openai.com/papers/dall-e-3.pdf

BIANCHI, Federico et al. Easily accessible text-to-image generation amplifies demographic stereotypes at large scale. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, [s. l.], p. 1493-1504, 2023. DOI: https://doi.org/10.1145/3593013.3594095

CENTRO DE GESTÃO E ESTUDOS ESTRATÉGICOS (CGEE). Percepção pública da C&T no Brasil – 2019. Resumo executivo. CGEE, [s. l.], 2019. Disponível em: https://www.cgee.org.br/web/percepcao

CHAKRAVERTY, Snehashish; MAHATO, Nisha Rani; SAHOO, Deepti Moyi. McCulloch-Pitts neural network model. In: Concepts of Soft Computing, New York, p. 167-173, 2019. DOI: https://doi.org/10.1007/978-981-13-7430-2_11

CHAMBERS, David Wade. Stereotypic images of the scientist: the draw a scientist test. Science Education, [s. l.], v. 67, n. 2, p. 255-265, 1983. Disponível em: https://acesse.dev/He0JM. Acesso em: 1 jul. 2024.

CHIMBA, Mwenya Diana; KITZINGER, Jenny. Bimbo or boffin? Women in science: an analysis of media representations and how female scientists negotiate cultural contradictions. Public Understanding of Science, Thousand Oaks, v. 19, n. 5, p. 609-624, 2010. Disponível em: https://www.researchgate.net/publication/51108862_Bimbo_or_boffin_Women_in_science_An_analysis_of_media_representations_and_how_female_scientists_negotiate_cultural_contradictions. Acesso em: 1 jul. 2024.

COZMAN, Fabio Gagliardi. Inteligência Artificial: uma utopia, uma distopia. Teccogs: Revista Digital de Tecnologias Cognitivas, São Paulo, v. 17, n. 17, p. 32-43, 2018. DOI: https://doi.org/10.23925/1984-3585.2018i17p32-43

DAMASCENO, Daniel et al. Injustiça epistêmica e reafirmação de estereótipos: a representação do cientista no Fantástico e Domingo Espetacular durante a pandemia da Covid-19. Contracampo, Niterói, n. 1, v. 43, p. 1-17, 2024. DOI: https://doi.org/10.22409/contracampo.v43i1.61118

FEUERRIEGEL, Stefan et al. Generative AI. Business & Information Systems Engineering, New York, v. 66, n. 1, p. 111-126, 2024. DOI: https://doi.org/10.1007/s12599-023-00834-7

FLICKER, Eva. Between brains and breasts ‒ women scientists in fiction film: on the marginalization and sexualization of scientific competence. Public Understanding of Science, Thousand Oaks, v. 12, n. 3, p. 307-318, 2003. DOI: https://doi.org/10.1177/0963662503123009

FRANKENBERG, Ruth (ed.). Displacing whiteness: Essays in social and cultural criticism. Durham: Duke University Press, 1997.

FRASER, Kathleen C.; KIRITCHENKO, Svetlana; NEJADGHOLI, Isar. Diversity is not a one-way street: pilot study on ethical interventions for racial bias in text-to-image systems. Proceedings of the 14th International Conference on Computational Creativity, [s. l.], 2023. Disponível em: https://computationalcreativity.net/iccc23/papers/ICCC-2023_paper_97.pdf. Acesso em: 1 jul. 2024.

GAMKRELIDZE, Tamari; ZOUINAR, Moustafa; BARCELLINI, Flore. Working with Machine Learning/Artificial Intelligence systems: workers’ viewpoints and experiences. Proceedings of the 32nd European Conference on Cognitive Ergonomics, [s. l.], p. 1-7, 2021. DOI: https://doi.org/10.1145/3452853.3452876

HALL, Stuart. Cultura e representação. Rio de Janeiro: Apicuri/PUC-Rio, 2016.

HALL, Stuart. Representation: cultural representations and signifying practices. Londres: Sage, 1997.

HARAWAY, Donna. Saberes localizados: a questão da ciência para o feminismo e o privilégio da perspectiva parcial. Cadernos Pagu, São Paulo, n. 5, p. 7-41, 1995. Disponível em: https://periodicos.sbu.unicamp.br/ojs/index.php/cadpagu/article/view/1773

HARDING, Sandra. Strong objectivity: A response to the new objectivity question. Synthese, New York, v. 104, n. 3, p. 331-349, 1995. Disponível em: https://link.springer.com/article/10.1007/BF01064504

HAYNES, Roslynn D. The scientist in literature: images and stereotypes-their importance. Interdisciplinary Science Reviews, [s. l.], v. 14, n. 4, p. 384-398, 1989. DOI: https://doi.org/10.1179/isr.1989.14.4.384

KALOTA, Faisal. A primer on generative artificial intelligence. Education Sciences, v. 14, n. 2, 2024. DOI: https://doi.org/10.3390/educsci14020172

KING, Morgan. Harmful biases in artificial intelligence. The Lancet Psychiatry, London, v. 9, n. 11, p. e48, 2022. DOI: https://doi.org/10.1016/S2215-0366(22)00312-1

LAMBRECHT, Anja; TUCKER, Catherine. Algorithmic Bias? An Empirical Study of Apparent Gender-Based Discrimination in the Display of STEM Career Ads. Management Science, [s. l.], v. 65, n. 7, p. 2966-2981, 2019. DOI: https://doi.org/10.1287/mnsc.2018.3093

LARSON, Jeff et al. How We Analyzed the Compas Recidivism Algorithm. ProPublica, [s. l.], 2016. Disponível em: https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm. Acesso em: 1 jul. 2024.

LOBO, Paula; CABECINHAS, Rosa. The negotiation of meanings in the evening news: towards an understanding of gender disadvantages in the access to the public debate. International Communication Gazette, Thousand Oaks, v. 72, n. 4-5, p. 339-358, 2010. DOI: https://doi.org/10.1177/1748048510362611

LOPES, Maria Margaret. Sobre convenções em torno de argumentos de autoridade. Cadernos Pagu, São Paulo, n. 27, p. 35-61, 2006. DOI: https://doi.org/10.1590/S0104-83332006000200004

LUCY, Li; BAMMAN, David. Gender and representation bias in GPT-3 generated stories. Proceedings of the 3rd Workshop on Narrative Understanding, Long Beach, p. 48-55, 2021. DOI: https://doi.org/10.18653/v1/2021.nuse-1.5

MARTINEZ, A. R. Representation matters: theorizing health communication from the flesh. Health Communication, Abingdon, v. 38, n. 1, p. 184-190, 2021. DOI: https://doi.org/10.1080/10410236.2021.1950293

MASSARANI, Luisa; CASTELFRANCHI, Yurij; PEDREIRA, Anna Elisa. Cientistas na TV: como homens e mulheres da ciência são representados no Jornal Nacional e no Fantástico. Cadernos Pagu, São Paulo, n. 56, p. 1-34, 2019. DOI: https://doi.org/10.1590/18094449201900560015

MASSARANI, Luisa et al. O que os jovens brasileiros pensam da ciência e da tecnologia. Resumo executivo. Rio de Janeiro: INCT-CPCT, 2019.

MCCARTHY, John et al. A proposal for the Dartmouth summer research project on artificial intelligence. August 31, 1955. AI Magazine, Washington, D.C., v. 27, n. 4, p. 12-14, 2006. DOI: https://doi.org/10.1609/aimag.v27i4.1904

NEIVA, Silmara Cássia Pereira Couto et al. Perspectivas da ciência brasileira: um estudo sobre a distribuição de bolsas de pesquisa em produtividade do CNPq ao longo do ano de 2019. Revista Interdisciplinar Científica Aplicada, Blumenau, v. 16, n. 3, p. 51-71, 2022. Disponível em: https://portaldeperiodicos.animaeducacao.com.br/index.php/rica/article/view/18090. Acesso em: 1 jul. 2024.

RAMESH, Aditya et al. Zero-shot text-to-image generation. Proceedings of the 38th International Conference on Machine Learning, Long Beach, v. 139, p. 8821-8831, 2021. Disponível em: https://proceedings.mlr.press/v139/ramesh21a.html. Acesso em: 1 jul. 2024.

REZNIK, Gabriela; MASSARANI, Luisa Medeiros; RAMALHO, Marina; MALCHER, Maria A.; AMORIM, Luis; CASTELFRANCHI, Yurij. Como adolescentes apreendem a ciência e a profissão de cientista? Revista Estudos Feministas, Florianópolis, v. 25, n. 2, p. 829-855, 2017. DOI: https://doi.org/10.1590/1806-9584.2017v25n2p829

REZNIK, Gabriela; MASSARANI, Luisa Medeiros; MOREIRA, Ildeu de Castro. Como a imagem de cientista aparece em curtas de animação? História, Ciências, Saúde - Manguinhos, Rio de Janeiro, v. 26, n. 3, p. 753-777, 2019. DOI: https://doi.org/10.1590/S0104-59702019000300003

SALINAS, Abel et al. The unequal opportunities of large language models: examining demographic biases in job recommendations by ChatGPT and LLaMA. Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, Long Beach, n. 34, 2023. DOI: https://doi.org/10.1145/3617694.3623257

SARKAR, Sujan. Uncovering the ai industry: 50 most visited ai tools and their 24B+ traffic behavior. Writerbuddy, [s. l.], 2023. Disponível em: https://writerbuddy.ai/blog/ai-industry-analysis. Acesso em: 1 jul. 2024.

SCHIEBINGER, Londa. O feminismo mudou a ciência? Bauru: Edusc, 2001.

SCHULMAN, John et al. Introducing ChatGPT. OpenAI, [s. l.], 2022. Disponível em: https://openai.com/blog/chatgpt#OpenAI. Acesso em: 1 jul. 2024.

SCHWARCZ, L. K. M. O espetáculo das raças: cientistas, instituições e questão racial no Brasil: 1870-1930. São Paulo: Companhia das Letras, 1993.

SICHMAN, Jaime Simão. Inteligência Artificial e sociedade: avanços e riscos. Estudos Avançados, São Paulo, v. 35, n. 101, p. 37-50, 2021. DOI: https://doi.org/10.1590/s0103-4014.2021.35101.004

TAMBKE, Erika. Mulheres Brasil 40º: os estereótipos das mulheres brasileiras em Londres. Espaço e Cultura, Rio de Janeiro, n. 34, p. 123-150, 2013. Disponível em: https://www.e-publicacoes.uerj.br/espacoecultura/article/view/12744. Acesso em: 1 jul. 2024.

TEIXEIRA, Pedro. ChatGPT reforça estereótipos sobre mulheres brasileiras: magras, bronzeadas e com acessórios coloridos. Folha de S. Paulo, São Paulo, 6 mar. 2024. Disponível em: https://www1.folha.uol.com.br/tec/2024/03/chatgpt-reforca-estereotipos-sobre-mulheres-brasileiras-magras-bronzeadas-e-com-acessorios-coloridos.shtml. Acesso em: 1 jul. 2024.

VASWANI, Ashish et al. Attention Is All You Need. Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, p. 6000-6010, 2017.

VYAS, Darshali A. et al. Hidden in plain sight: reconsidering the use of race correction in clinical algorithms. New England Journal of Medicine, Boston, v. 383, n. 9, p. 874-882, 2020. DOI: https://doi.org/10.1056/nejmms2004740

ZACK, Travis et al. Assessing the potential of GPT-4 to perpetuate racial and gender biases in health care: a model evaluation study. The Lancet Digital Health, London, v. 6, n. 1, p. e12-e22, 2024. DOI: https://doi.org/10.1016/s2589-7500(23)00225-x

Published

2024-07-17

Section

Dossiê Do analógico à inteligência artificial: 30 anos de Comunicação & Educação

How to Cite

Neves, L. F. F., Medeiros, A., & Massarani, L. (2024). Between continuities and ruptures: the representation of scientists and science based on images generated by ChatGPT. Comunicação & Educação, 29(1), 127-146. https://doi.org/10.11606/issn.2316-9125.v29i1p127-146