Commentary

Gendered disinformation as violence: A new analytical agenda


The potential for harm embedded in mis- and disinformation content, regardless of intentionality, opens space for a new analytical agenda that investigates the weaponization of identity-based features like gender, race, and ethnicity through the lens of violence. We therefore lay out the triangle of violence to support new studies of the multimedia content, victims, and audiences of false claims. Finally, we define gendered disinformation as the employment of systematic and multidirectional flows of violence, through (un)conscious content manipulation, audience engagement, and victim-blaming, to prevent women and gender minorities from further political participation.


Introduction

Gendered disinformation, the weaponization of stereotypes related to women and LGBTQIA+ individuals in disinformation content, has scarcely been addressed by scholars (Camargo & Simon, 2022) despite recent pleas to critically examine the phenomenon (Freelon & Wells, 2020; Kuo & Marwick, 2021). Though a few industry and policy-making reports (e.g., Jankowicz et al., 2021), book chapters (e.g., Bardall, 2023), and commentaries (e.g., Veritasia et al., 2024) have addressed the overall issue, essential questions deserve further investigation. Technological advances such as generative artificial intelligence (gen-AI) merit particular attention as vehicles of gendered harm, as false and stereotypical claims may be amplified through manipulated images, including sexually explicit deepfakes (Rodriguez & Mithani, 2024), which may discourage women and gender minorities from public life and political participation.

In this commentary, we propose a two-fold analytical agenda to foster new research and policy-making solutions. First, we suggest shifting the focus away from intentionality as a defining element in disinformation by applying the lens of gendered disinformation as violence. Our argument is inspired by decolonial feminism and critical approaches to mis- and disinformation studies, thereby acknowledging that the potential for harm (Freelon & Wells, 2020) exists regardless of intentionality in the distribution of mis- and disinformation content. In other words, we contend that violence through discourse is not always inflicted consciously. With this, we aim to move beyond the frequently invoked “malign actor” of disinformation. Second, we offer an analytical framework to study the phenomenon as a triangle of violence (Figure 1). Our approach addresses the multiple angles from which violence flows travel among the three vertices: multimedia digital content (i.e., the combination of text, audio, picture, and video) and its creators, who enact violence; victims, who are targeted in false and stereotypical or misogynistic claims and suffer from violence; and audiences, who witness and sometimes engage with violent behavior and content.

A unified conceptualization

Current definitions of gendered disinformation often overlap with other forms of violence (e.g., hate speech, incivility, political propaganda, harassment, and bullying), leaving the very definition fragmented. Judson et al. (2020) suggest that gendered disinformation “exists at the intersection of disinformation with online violence” (p. 11). Similarly, Jankowicz et al. (2021) add that it encompasses “falsity, malign intent, and coordination” (p. 1). Alternatively, Bardall (2023) calls “gendered disinforming” (p. 113) the means of “weaponizing information” (p. 117) to perpetrate violence against women in politics, considering that gender abuse, and the response to it, shapes political participation (Sobieraj, 2020).

To include identities that frequently intersect with gender—like race, ethnicity, age, class, and religion—specialists, particularly in policy making, prefer concepts such as identity-based disinformation (Bradshaw, 2024), gendered online disinformation (Frau-Meigs & Velez, 2024), or gender and identity disinformation (GID), as used by the Artemis Alliance (EU DisinfoLab, 2025). So far, however, policy-making projects have focused on actors spreading gendered disinformation, such as Russia, in relation to anti-gender narratives (Stolze, 2025). In the rare cases in which policy initiatives delved deeper, such as the EU’s Gendered Online Disinformation Policy Brief (Frau-Meigs & Velez, 2024), policy makers highlighted the conceptual fragmentation as an obstacle to engaging platforms and governments.

The attempt to drive women and gender minorities out of positions of power reveals a potential connection between gendered disinformation and democratic erosion. In recent German elections, the only female candidate, Annalena Baerbock, was targeted by disinformation campaigns on Facebook more often than her male counterparts (Smirnova et al., 2021). In the United States, the American Sunlight Project found 35,000 mentions of nonconsensual intimate imagery on deepfake websites depicting 25 female members and only one male member of Congress (Rodriguez & Mithani, 2024). In Brazil, one-third of 4,700 YouTube comments during the 2024 city-level elections contained personal attacks against women politicians based on intersectional identities, with misogyny present across all content and transphobia and ageism targeting specific individuals (Coelho, 2024).

The study of visual forms of disinformation has lagged further behind, despite recent headway (see Dan et al., 2021; Hameleers, 2025; Weikmann & Lecheler, 2023). We call attention to this because of the unique sense of truthfulness attached to visual imagery, with videos and photos deemed a proxy for “evidence” (Brennen et al., 2021). Scholars have placed different levels of image manipulation on a continuum, ranging from cheap-fakes to deepfakes (Paris & Donovan, 2019): Cheap-fakes are low-sophistication interventions (e.g., authentic content that recirculates out of its context; see Weikmann & Lecheler, 2023), while deepfakes involve doctored videos synthetically impersonating someone’s voice and image. Deepfakes are particularly harmful to women and gender minorities in public-facing jobs, who are often targeted in sexualized content (Jankowicz, 2023). Though visual disinformation contributes to the delegitimization of political actors (Hameleers et al., 2024) and is incorporated into digital politics to anchor informationally precarious claims (Amit-Danhi & Aharoni, 2023), its identity-based components are understudied. According to Gehrke & Pasitselska (2024), only 7% of the papers presented in 2024 at two relevant conferences with a strong political communication division had gender and/or race in the titles. Such papers were placed in gender- or race-themed sessions, thus limiting the discussion to specialized scholarly audiences.

Thus, we propose to unify the many definitions by applying the lens of gendered disinformation as violence, defining the phenomenon as the employment of systematic and multidirectional flows of violence (experienced, directed and witnessed) through (un)conscious content manipulation, audience engagement, and victim-blaming to prevent women and gender minorities from (further) political participation.

Approaching gendered disinformation as violence

In an early definition of misinformation and disinformation, Wardle & Derakhshan (2017) categorized both as false and misleading information belonging to the realm of information disorder, differentiated by intent: While misinformation has no intention to cause harm, disinformation is designed to cause it. The conceptualization of disinformation has since gained more layers, encompassing fabricated content designed to achieve a political or economic goal (Tandoc et al., 2018) as well as the role of legacy media in perpetuating gender and racial stereotypes and societal inequalities (Kuo & Marwick, 2021). Our approach is informed by Freelon & Wells’s (2020) three defining characteristics of disinformation: deception, potential for harm, and intent to harm. By framing gendered disinformation as violence, we acknowledge that the potential for harm is present across both mis- and disinformation.

Unlike earlier definitions that viewed misinformation as mostly incidental and harmless, as was common in the early stages of COVID-19 prevention (Gehrke & Benetti, 2021), our perspective acknowledges that false or harmful content is not always fabricated or shared by a malign actor. In other words, someone does not necessarily fabricate or share false, stereotypical, or misogynistic content with the awareness that it might be experienced as violence or a violent social control mechanism (see Manne, 2018) by the target (e.g., a woman politician) or parts of the audience.

A relevant strand of scholarship locates the origins of social inequalities in the colonial exploitation of lands and bodies through violence for wealth acquisition (Sánchez-Ancochea, 2021; Vergès, 2020). Genocides, massacres, rapes, and slavery have established hierarchies of domination, creating lasting extractivist economies that exploit the environmental resources and peoples of the Global South (Vergès, 2020). Violence remains hierarchical, translating into alarming femicide rates (Segato, 2016; 2021) and into uneven resource distribution for poor and racialized populations. Governments around the world perpetuate social inequalities by selectively punishing or protecting certain groups (Vergès, 2020). With the digital environment functioning as a continuation of land as the primary arena of colonial violence, those same hierarchies allow disinformation to thrive, toxically contaminating digital discourses and debates on social media platforms (Recuero, 2024).

Our conceptualization allows gendered disinformation to be defined by the targets and identity-based attributes of the content. That is, the phenomenon’s meaning rests on what women and gender minorities experience from it: Perceiving gendered disinformation as violence may prevent them from political participation. Inspired by previous theoretical proposals on violence through the lens of media ecology and platforms (Morales, 2023), our approach adopts Dunn’s (2021) argument that technology-facilitated violence should be viewed as a continuum, without separating the digital and physical aspects of harmful behaviors. Thus, the violence lens allows us to view gendered disinformation as the weaponization of identity-based features in multidirectional flows of violence.

A new analytical framework

Our approach views gendered disinformation as a system of violent flows among content (or its creators), victims, and audiences. Through the overarching violence inherent to the societal structure, flows of identity-based disinformation travel around the triangle, revealing a system of enacting, experiencing, and witnessing violence (see Figure 1).

Figure 1. The triangle of violence in gendered disinformation.

To encompass the multifaceted nature of gendered disinformation, the triangle of violence starts with (1) multimedia content and its creators, which expands the content-based approach commonly adopted in disinformation studies. Invoking the concept of media witnessing (Pinchevski, 2009; 2019), it shines a light beyond the creator-target dichotomy and suggests that disinformation folds within it an act of violence or harm mediated by technology, linking content, creators, victims, and audiences. As (1) content is disseminated to (3) audiences, a flow of violence travels to the side of the (2) victims, who experience harm when targeted in fabricated content.

In this system, victimization and harm are not singular or individually targeted instances but rather ripple further: (3) Audiences’ consumption of gendered disinformation may place entire groups at the (2) victim vertex, as they may experience hurt due to sharing the identity-based characteristics targeted by (1) the content. Thus, roles in our framework are overlapping, fluid, and interchangeable, making the triangulation of role(s) a key part of applying our framework. The arrows outside the triangle indicate that the flow travels back and forth from one side to the other in a feedback system that responds to certain kinds of input. This is not a closed or singular-event system: Harm and hurt echo in the witnesses and victims as the content continues to spread across the digital world. By challenging the singular focus on content in studying disinformation, we highlight the ripples of secondary effects, thus extending the violent potential of gendered disinformation beyond content and falsehood into the systemic violence emblematic of colonial and gendered power structures.

We propose that different projects and policy initiatives, particularly considering the rapid development of AI, may choose to focus on the triangle in its entirety or emphasize a singular type of violence flow within the system. This may open new directions for policy initiatives focused on locating regulatory gaps across the system and providing balance that will moderate or suppress the impact of identity-driven disinformation on gendered, intersectional minorities. For researchers, a mixed-method approach or a system-level methodology is often necessary (see Figure 2). Each of the vertices may be tackled via its own set of tools, according to the research question.

Alongside our proposed methods, we encourage the use of innovative and recent approaches, such as Peng et al.’s (2023) exploration of people’s credibility perceptions via three pathways, including formats (i.e., photo, video, meme, and data) and two-fold features (i.e., from objective characteristics like color and composition to perceived features such as professional quality and aesthetics) of visual data.

Figure 2. The triangle of violence in gendered disinformation: A methodological and policy outline.

Conclusion

So far, scholars and policy-making specialists have tackled a single portion of gendered disinformation, namely either gender or disinformation, or have framed the phenomenon through its overlap with other forms of (gender-based) violence, such as hate speech, harassment, and other forms of abuse. While these works are essential, particularly for policy making, there is room for improvement and innovation in theory and methodology, which is why we structured our contribution as a two-fold proposition.

Our commentary proposes to unify the fragmented concept by framing gendered disinformation as violence. In doing so, we name the identity-based aspect and the potential for harm inherent to this type of fabricated content without treating it as something rare and episodic. Intent is not always clear, and violence occurs regardless of intent. Without a comprehensive approach, audiences might perceive gendered disinformation as something occasional that disrupts society rather than as the omnipresent systemic violence that primarily targets women, gender minorities, and people of color.

In this sense, rooting our approach in critical disinformation studies and decolonial feminist theories allows us to claim that violence is systemic and requires a multi-sided approach. We also made the deliberate choice not to differentiate digital from physical violence, as such a distinction risks downplaying harmful behavior in digital spaces. Therefore, we structured our methodological framework as the triangle of violence, in which violence flows among all three vertices.

Cite this Essay

Gehrke, M., & Amit-Danhi, E.R. (2025). Gendered disinformation as violence: A new analytical agenda. Harvard Kennedy School (HKS) Misinformation Review. https://doi.org/10.37016/mr-2020-177

Bibliography

Amit-Danhi, E.R., & Aharoni, T. (2023). “Seeing” into the future: Anchoring strategies in future-oriented Twitter visuals. First Monday, 28(9). https://doi.org/10.5210/fm.v28i9.12884

Bardall, G. (2023). Nasty, fake, and online: Distinguishing gendered disinformation and violence against women in politics. In G. Haciyakupoglu & Y. Wong (Eds.), Gender and security in digital space: Navigating access, harassment, and disinformation (pp. 109–123). Routledge. https://doi.org/10.4324/9781003261605

Bradshaw, S. (2024, November). Disinformation and identity-based violence. Stanley Center for Peace and Security, School of International Service, American University. https://stanleycenter.org/publications/disinformation-and-identity/

Camargo, C. Q., & Simon, F. M. (2022). Mis- and disinformation studies are too big to fail: Six suggestions for the field’s future. Harvard Kennedy School (HKS) Misinformation Review, 3(5). https://doi.org/10.37016/mr-2020-106

Coelho, G. (2024, September 13). MonitorA expõe misoginia e transfobia contra candidatas em 2024 [MonitorA exposes misogyny and transphobia against candidates in 2024]. Azmina. https://azmina.com.br/reportagens/monitora-expoe-misoginia-e-transfobia-contra-candidatas-em-2024

Dan, V., Paris, B., Donovan, J., Hameleers, M., Roozenbeek, J., van der Linden, S., & von Sikorski, C. (2021). Visual mis- and disinformation, social media, and democracy. Journalism & Mass Communication Quarterly, 98(3), 641–664. https://doi.org/10.1177/10776990211035395

Dunn, S. (2021). Is it actually violence? Framing technology-facilitated abuse as violence. In J. Bailey, A. Flynn, & N. Henry (Eds.), The Emerald international handbook of technology-facilitated violence and abuse (pp. 22–45). Emerald Publishing Limited. https://doi.org/10.1108/978-1-83982-848-520211002

EU DisinfoLab (2025, March 3). Countering gender and identity disinformation – threats and strategies [Video]. YouTube. https://www.youtube.com/watch?v=1lFNBeG7M-Y

Freelon, D., & Wells, C. (2020). Disinformation as political communication. Political Communication, 37(2), 145–156. https://doi.org/10.1080/10584609.2020.1723755

Frau-Meigs, D., & Velez, I. (2024). Gendered online disinformation policy brief (Bulgaria, France, Greece, Italy). European Union. https://hal.science/hal-04715529v1

Gehrke, M., & Benetti, M. (2021). Disinformation in Brazil during the Covid-19 pandemic: Topics, platforms, and agents. Fronteiras, 23(2), 14–28. https://doi.org/10.4013/fem.2021.232.02

Gehrke, M., & Pasitselska, O. (2024). Disinformation and identity(-based features) in political communication research. Political Communication Report, 30. https://politicalcommunication.org/article/gehrke-pasitselska-disinformation-and-identity/

Hameleers, M. (2025). The nature of visual disinformation online: A qualitative content analysis of alternative and social media in the Netherlands. Political Communication, 42(1), 108–126. https://doi.org/10.1080/10584609.2024.2354389

Hameleers, M., van der Meer, T., & Vliegenthart, R. (2024). How persuasive are political cheapfakes disseminated via social media? The effects of out-of-context visual disinformation on message credibility and issue agreement. Information, Communication & Society, 28(1), 61–78. https://doi.org/10.1080/1369118x.2024.2388079

Jankowicz, N. (2023, June 25). I shouldn’t have to accept being in a deepfake porn. The Atlantic. https://www.theatlantic.com/ideas/archive/2023/06/deepfake-porn-ai-misinformation/674475/

Jankowicz, N., Hunchack, J., Pavliuc, A., Davies, C., Pierson, S., & Kaufmann, Z. (2021). Malign creativity: How gender, sex, and lies are weaponized against women online. Science and Technology Innovation Program, Wilson Center. https://www.wilsoncenter.org/publication/malign-creativity-how-gender-sex-and-lies-are-weaponized-against-women-online

Judson, E., Atay, A., Krasodomski-Jones, A., Lasko-Skinner, R., & Smith, J. (2020). Engendering hate: The contours of state-aligned gendered disinformation online. Demos. https://demos.co.uk/research/engendering-hate-the-contours-of-state-aligned-gendered-disinformation-online/

Kuo, R., & Marwick, A. (2021). Critical disinformation studies: History, power, and politics. Harvard Kennedy School (HKS) Misinformation Review, 2(4). https://doi.org/10.37016/mr-2020-76

Manne, K. (2018). Down girl: The logic of misogyny. Oxford University Press.

Morales, E. (2023). Ecologies of violence on social media: An exploration of practices, contexts, and grammars of online harm. Social Media + Society, 9(3). https://doi.org/10.1177/20563051231196882

Paris, B., & Donovan, J. (2019). Deepfakes and cheap fakes: The manipulation of audio and visual evidence. Data & Society. https://datasociety.net/library/deepfakes-and-cheap-fakes/

Peng, Y., Lu, Y., & Shen, C. (2023). An agenda for studying credibility perceptions of visual misinformation. Political Communication, 40(2), 225–237. https://doi.org/10.1080/10584609.2023.2175398

Pinchevski, A. (2009). Introduction: Why media witnessing? Why now? In P. Frosh & A. Pinchevski (Eds.), Media witnessing: Testimony in the age of mass communication (pp. 1–19). Palgrave Macmillan UK.

Pinchevski, A. (2019). Transmitted wounds: Media and the mediation of trauma. Oxford University Press.

Recuero, R. (2024). The platformization of violence: Toward a concept of discursive toxicity on social media. Social Media + Society, 10(1). https://doi.org/10.1177/20563051231224264

Rodriguez, B., & Mithani, J. (2024, December 11). AI enters Congress: Sexually explicit deepfakes target women lawmakers. The 19th. https://19thnews.org/2024/12/ai-sexually-explicit-deepfakes-target-women-congress/

Sánchez-Ancochea, D. (2021). The costs of inequality in Latin America: Lessons and warnings for the rest of the world. I.B. Tauris.

Segato, R. (2016). La guerra contra las mujeres [The war against women]. Traficantes de Sueños. https://traficantes.net/sites/default/files/pdfs/map45_segato_web.pdf

Segato, R. L. (2021). Las estructuras elementales de la violencia: Ensayos sobre género entre la antropología, el psicoanálisis y los derechos humanos [The elemental structures of violence: Essays on gender from anthropology, psychoanalysis, and human rights]. Prometeo Libros.

Smirnova, J., Winter, H., Mathelemuse, N., Dorn, M., & Schwertheim, H. (2021). Digitale Gewalt und Desinformation gegen Spitzenkandidat: innen vor der Bundestagswahl 2021 [Digital violence and disinformation against top candidates ahead of the 2021 federal election]. Institute for Strategic Dialogue. https://www.isdglobal.org/isd-publications/digitale-gewalt-und-desinformation-gegen-spitzenkandidatinnen-vor-der-bundestagswahl-2021/

Sobieraj, S. (2020). Credible threat: Attacks against women online and the future of democracy. Oxford University Press.

Stolze, M. (2025, February 24). The importance of feminist approaches in tackling (AI-driven) gendered disinformation to counter election interference. The Centre for Feminist Foreign Policy. https://centreforfeministforeignpolicy.org/2025/02/24/the-importance-of-feminist-approaches-in-tackling-ai-driven-gendered-disinformation/

Tandoc, E. C., Lim, Z. W., & Ling, R. (2018). Defining “fake news”: A typology of scholarly definitions. Digital Journalism, 6(2), 137–153. https://doi.org/10.1080/21670811.2017.1360143

Vergès, F. (2020). A feminist theory of violence. Pluto Press.

Veritasia, M. E., Muthmainnah, A. N., & de-Lima-Santos, M. F. (2024). Gendered disinformation: A pernicious threat to equality in the Asia Pacific. Media Asia, 1–9. https://doi.org/10.1080/01296612.2024.2367859

Wardle, C., & Derakhshan, H. (2017). Information disorder: Toward an interdisciplinary framework for research and policy-making. Council of Europe. https://edoc.coe.int/en/media/7495-information-disorder-toward-an-interdisciplinary-framework-for-research-and-policy-making.html

Weikmann, T., & Lecheler, S. (2023). Visual disinformation in a digital age: A literature synthesis and research agenda. New Media & Society, 25(12), 3696–3713. https://doi.org/10.1177/14614448221141648

Funding

No funding has been received to conduct this research.

Competing Interests

The authors declare no competing interests.

Copyright

This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided that the original author and source are properly credited.

Authorship

The authors have equally contributed to this manuscript.

Acknowledgements

This commentary is a result of many conversations and rounds of feedback. The authors would like to thank the reviewers and the Editorial Committee of Harvard Kennedy School (HKS) Misinformation Review for their valuable input and service from the submission to the publication date. The authors would also like to express their gratitude to colleagues who were particularly helpful in early stages of this project, namely, (stated in alphabetical order) Clara Iglesias-Keller, Elizaveta Kuznetsova, Esteban Morales, Kate Saner, Marcel J. Broersma, Martha Stolze, Pablo Valdivia, Raingard Esser, and Scott A. Eldridge II.