The Architecture of Digital Deception: Mapping Fake News Typologies, Bot Behaviors, and Platform Vulnerabilities
The emergence of social media has radically changed how information is produced, disseminated, and consumed. Alongside its advantages, the diffusion of misinformation has become a major threat to public debate, democratic processes, and social cohesion. This paper presents an extensive review of fake news typologies, drawing on existing scholarship that categorizes fake news as satire, propaganda, disinformation, misinformation, manipulation, rumor, crowdturfing, hate speech, spam, trolling, and cyberbullying. Each category is discussed with respect to its purpose, accuracy, and influence on users. The paper also examines the role of bots and computational propaganda, which automate and amplify the spread of misleading content online, particularly during politically sensitive periods such as elections. It identifies several shortcomings of existing platform moderation systems, which largely fail to block the dissemination of dangerous content in real time. In response, the paper highlights the important role of information professionals, i.e., journalists, educators, librarians, and digital media specialists, in reducing the spread of false information. They are tasked with fact-checking, source validation, media literacy education, and empowering citizens to assess online information. In addition, the paper advocates the development of more resilient AI-based detection mechanisms that can respond quickly to the proliferation of harmful content. Ultimately, the research is expected to foster a more informed and less vulnerable public, prepared to meet the challenges of the digital information era through both technological tools and human expertise.
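As an illustration of the kind of AI-based detection mechanism advocated above, the following is a minimal, hypothetical sketch of a text-based misinformation classifier, assuming scikit-learn is available. The toy posts and labels are invented purely for illustration and do not represent any system surveyed in this paper; production systems are trained on large annotated corpora and combine content, network, and account-level signals.

```python
# Minimal sketch of a supervised misinformation classifier (assumption:
# scikit-learn is installed). The tiny labeled dataset is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy examples: 1 = misleading, 0 = credible.
posts = [
    "Miracle cure hidden by doctors, share before it is deleted!",
    "Scientists publish peer-reviewed study on vaccine efficacy.",
    "Secret plot revealed: election results were fabricated!!!",
    "Election commission releases certified vote tallies.",
]
labels = [1, 0, 1, 0]

# TF-IDF word and bigram features feed a linear classifier; in practice
# these would be combined with propagation and account-based features.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(posts, labels)

# Score an unseen post: values near 1.0 suggest misleading content.
print(model.predict_proba(["Shocking truth they do not want you to see!"])[0][1])
```

A real-time deployment would sit behind the platform's moderation pipeline, flagging high-scoring posts for review rather than removing them automatically, which is one way to pair automated detection with the human judgment of information professionals discussed above.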

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.