Linguistic Analysis of the News Coverage of Artificial Intelligence
https://doi.org/10.31652/
Published: 2025-12-24
Keywords
- media, news, headlines, artificial intelligence, critical discourse analysis, discursive legitimation

This work is licensed under a Creative Commons Attribution 4.0 International License.
Abstract
The coverage of artificial intelligence (AI) in the news plays a significant role in the public perception of AI and in the regulation of this technology. This portrayal is especially consequential today, since AI is increasingly used in numerous areas. It is therefore unsurprising that studies of media coverage of AI have recently become widespread. Researchers apply content analysis, frame analysis, sentiment (tone) analysis, and other methods that allow them to examine large volumes of texts and determine whether AI is represented positively or negatively. At the same time, few studies examine the representation of AI from a linguistic perspective, which explains the relevance of this article's topic. The purpose of this study is to analyze the tactics of discursive (de)legitimization of AI used in news headlines. The critical discourse analysis approach, specifically the methodology of T. van Leeuwen, was applied, since media representation can affect the legitimation or delegitimation of AI use. The research data comprised headlines of TSN, Ukrainska Pravda, and UNIAN articles returned by the «artificial intelligence» search query and published from August 2024 to July 2025. The objectives of the study were to outline the main trends and narratives in the representation of artificial intelligence in the foreign press, to identify the tactics of discursive (de)legitimization of AI used in Ukrainian news headlines, and to determine which tactics are used most often and for what purposes. A review of scholarly articles on the portrayal of AI in the media did not support the assumption that AI is represented predominantly negatively: according to existing research, the prevailing attitude is optimistic or neutral. However, as several studies have demonstrated, the media mention the risks of AI increasingly often. The analysis of Ukrainian news revealed that 92 headlines contain strategies that legitimize AI, while 122 headlines contain delegitimation strategies. The use of AI is usually legitimized by indicating the positive consequences of its activity (tactics of result and means orientation), by referring to its use by influential countries or institutions (the tactic of positional authority), by comparing AI with human intelligence (the tactic of analogy), and by using adjectives, nouns, and verbs with positive connotations to denote the actions of AI (tactics of evaluation and abstraction). Delegitimation of AI is realized in news headlines by reporting the negative consequences of its activity or the illegitimate purposes for which it is used (tactics of result, effect, and means orientation), as well as by using nouns, verbs, adjectives, and adverbs with negative connotations to describe AI's actions (tactics of abstraction and evaluation). It was also noted that AI is often represented in the media as an agent. Such representation can downplay the role of those who use AI when achievements are reported, shift responsibility for negative consequences from people to AI, portray certain outcomes as inevitable or uncontrollable, and make the headlines sensational.
References
- Bennett, S. (2022). Mythopoetic Legitimation and the Recontextualisation of Europe's Foundational Myth. Journal of Language and Politics, vol. 21, issue 2, pp. 370–389. https://doi.org/10.1075/jlp.21070.ben (in English).
- Björkvall, A., Nyström Höög, C. (2019). Legitimation of value practices, value texts, and core values at public authorities. Discourse & Communication, vol. 13, issue 4, pp. 398–414. https://doi.org/10.1177/1750481319842457 (in English).
- Brennen, J. S., Howard, P. N., Nielsen, R. (2018). An industry-led debate: how UK media cover artificial intelligence. Oxford: Reuters Institute for the Study of Journalism. https://doi.org/10.60625/risj-v219-d676 (in English).
- Bunz, M., Braghieri, M. (2022). The AI doctor will see you now: assessing the framing of AI in news coverage. AI & Society, vol. 37, pp. 9–22. https://doi.org/10.1007/s00146-021-01145-9 (in English).
- Cap, P. (2008). Towards the proximization model of the analysis of legitimization in political discourse. Journal of Pragmatics, vol. 40, issue 1, pp. 17–41. https://doi.org/10.1016/j.pragma.2007.10.002 (in English).
- Cave, S., Craig, C., Dihal, K., Dillon, S., Montgomery, J., Singler, B., Taylor, L. (2018). Portrayals and perceptions of AI and why they matter. London: The Royal Society. https://doi.org/10.17863/CAM.34502 (in English).
- Cave, S., Coughlan, K., Dihal, K. (2019). «Scary Robots»: Examining Public Responses to AI. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pp. 331–337. https://doi.org/10.1145/3306618.3314232 (in English).
- Cave, S., Dihal, K. (2019). Hopes and Fears for Intelligent Machines in Fiction and Reality. Nature Machine Intelligence, issue 1, pp. 74–78. https://doi.org/10.1038/s42256-019-0020-9 (in English).
- Chadiuk, M. O. (2023). Discursive strategies of legitimation and delegitimation in news texts. (PhD thesis). Kyiv, 286 p. (in Ukrainian).
- Cheremnykh, I. (2024). Artificial intelligence in the media industry and media education. Main challenges and competitive advantages. Communications and communicative technologies, no. 24, pp. 123–132. https://doi.org/10.15421/292413 (in Ukrainian).
- Chuan, C., Tsai, W. S., Cho, S. (2019). Framing Artificial Intelligence in American Newspapers. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pp. 339–344. https://doi.org/10.1145/3306618.3314285 (in English).
- Fast, E., Horvitz, E. (2017). Long-Term Trends in the Public Perception of Artificial Intelligence. Proceedings of the AAAI Conference on Artificial Intelligence, vol. 31, no. 1, pp. 963–969. https://doi.org/10.1609/aaai.v31i1.10635 (in English).
- Fedoriv, Y., Pirozhenko, I., Shuhai, A. (2023). Linguistic Analysis of Human- and AI-Created Content in Academic Discourse. Journal of Vasyl Stefanyk Precarpathian National University. Philology, vol. 10, pp. 47–67. https://doi.org/10.15330/jpnuphil.10.47-67 (in English).
- Fleisig, E., Smith, G., Bossi, M., Rustagi, I., Yin, X., Klein, D. (2024). Linguistic Bias in ChatGPT: Language Models Reinforce Dialect Discrimination. Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp. 13541–13564. https://doi.org/10.18653/v1/2024.emnlp-main.750 (in English).
- González-Arias, C., López-García, X. (2024). Rethinking the Relation between Media and Their Audience: The Discursive Construction of the Risk of Artificial Intelligence in the Press of Belgium, France, Portugal, and Spain. Journalism and Media, vol. 5, issue 3, pp. 1023–1037. https://doi.org/10.3390/journalmedia5030065 (in English).
- Hansson, S., Page, R. (2022). Corpus-assisted analysis of legitimation strategies in government social media communication. Discourse & Communication, vol. 16, issue 5, pp. 551–571. https://doi.org/10.1177/17504813221099202 (in English).
- Hart, C. (2011). Legitimizing assertions and the logico-rhetorical module: Evidence and epistemic vigilance in media discourse on immigration. Discourse Studies, vol. 13, no. 6, pp. 751–769. https://doi.org/10.1177/1461445611421360 (in English).
- Inwood, O., Zappavigna, M. (2024). The legitimation of screenshots as visual evidence in social media: YouTube videos spreading misinformation and disinformation. Visual Communication. https://doi.org/10.1177/14703572241255664 (in English).
- Kuznetsova, O. (2024). Features of Russian disinformation created by AI on the Internet media, social networks. Bulletin of Lviv Polytechnic National University: journalism, no. 1(7), pp. 79–89. https://doi.org/10.23939/sjs2024.01.079 (in Ukrainian).
- Lammar, D., Horst, M., Müller, R. (2025). AI in the German Media: Narratives of AI-in-Particular and AI-in-General in German Media Reporting About Artificial Intelligence. Digital Journalism, pp. 1–19. https://doi.org/10.1080/21670811.2025.2493759 (in English).
- Lehominova, S., Tyshchenko, V., Nedodai, M., Diachuk, O., Kapeliushna, T. (2024). Artificial intelligence and social networks: approaches to detecting fake information. Telecommunication and information technologies, no. 4 (85), pp. 83–89. https://doi.org/10.31673/2412-4338.2024.044748 (in Ukrainian).
- Mackay, R. R. (2015). Multimodal legitimation: Selling Scottish independence. Discourse & Society, vol. 26, no. 3, pp. 323–348. https://doi.org/10.1177/0957926514564737 (in English).
- Marketing Media Review. (2024). What Ukrainian media wrote about artificial intelligence in 2023: research. Available at: https://mmr.ua/shho-pysaly-pro-shtuchnyj-intelekt-ukrayinski-media-v-2023-roczi-doslidzhennya (in Ukrainian).
- Mashkova, Ya., Romaniuk, O. (2025). The Institute of Mass Information analysts record a slight decrease in online media traffic. Similarweb data analysis. Available at: https://imi.org.ua/monitorings/analityky-imi-fiksuyut-neznachne-prosidannya-trafiku-onlajn-media-analiz-danyh-similarweb-i69051 (26.08.2025) (in Ukrainian).
- Newman, N., Ross Arguedas, A., Robertson, C. T., Nielsen, R. K., Fletcher, R. (2025). Digital news report 2025. Oxford: Reuters Institute for the Study of Journalism. https://doi.org/10.60625/risj-8qqf-jt36 (in English).
- Nguyen, D. (2023). How news media frame data risks in their coverage of big data and AI. Internet Policy Review, vol. 12, issue 2, pp. 1–30. https://doi.org/10.14763/2023.2.1708 (in English).
- Ouchchy, L., Coin, A., Dubljević, V. (2020). AI in the headlines: the portrayal of the ethical issues of artificial intelligence in the media. AI & Society, vol. 35, pp. 927–936. https://doi.org/10.1007/s00146-020-00965-5 (in English).
- Reyes, A. (2011). Strategies of legitimization in political discourse: From words to actions. Discourse & Society, vol. 22, no. 6, pp. 781–807. https://doi.org/10.1177/0957926511419927 (in English).
- Rojo, L. M., van Dijk, T. A. (1997). «There was a Problem, and it was Solved!»: Legitimating the Expulsion of «Illegal» Migrants in Spanish Parliamentary Discourse. Discourse & Society, vol. 8, issue 4, pp. 523–566. https://doi.org/10.1177/0957926597008004005 (in English).
- Ross, A. S., Rivers, D. J. (2017). Digital cultures of political participation: internet memes and the discursive delegitimization of the 2016 U.S. presidential candidates. Discourse, Context and Media, vol. 16, pp. 1–11. https://doi.org/10.1016/j.dcm.2017.01.001 (in English).
- Rudenko, N. V. (2022). Suggestion as a means of public opinion shaping in modern English-language internet editions: information and communication strategies and ways to implement them. (PhD thesis). Sumy, 235 p. (in Ukrainian).
- Sartori, L., Bocca, G. (2023). Minding the gap(s): public perceptions of AI and socio-technical imaginaries. AI & Society, vol. 38, pp. 443–458. https://doi.org/10.1007/s00146-022-01422-1 (in English).
- Shevko, D. H. (2020). Legitimation of aggression against Ukraine in Russian official discourse. Strategic panorama, no. 1–2, pp. 48–57. (in Ukrainian).
- Sirinok-Dolharova, K. H. (2012). Global News Discourse: Trends in the Functioning of English-Language Internet Media. Kyiv and Zaporizhzhia, Center for Free Press; Zaporizhzhia National University. (in Ukrainian).
- Ukrainian media, attitudes and trust in 2024. USAID-Internews survey on media consumption (2024). Available at: https://drive.google.com/file/d/1kwsclr3Qm2QaqaIVv0_l4saWRFY_NXWb/view (in Ukrainian).
- Vaara, E. (2014). Struggles over legitimacy in the Eurozone crisis: Discursive legitimation strategies and their ideological underpinnings. Discourse & Society, vol. 25, no. 4, pp. 500–518. https://doi.org/10.1177/0957926514536962 (in English).
- van Leeuwen, T. (2008). Discourse and Practice: New Tools for Critical Discourse Analysis. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780195323306.001.0001 (in English).
- Weber, K. (2025). Public images of artificial intelligence: an overview. Media – Culture – Social Communication, no. 21, pp. 9–24. https://doi.org/10.31648/mcsc.10343 (in English).
- Zhu, Y., McKenna, B. (2012). Legitimating a Chinese takeover of an Australian iconic firm: Revisiting models of media discourse of legitimacy. Discourse & Society, vol. 23, no. 5, pp. 525–552. https://doi.org/10.1177/0957926512452971 (in English).
