Detecting Fake News Spreaders on Twitter Through Follower Networks
Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering (LNICST), ISSN: 1867-822X, Vol. 480, pp. 181-195
2023
Metrics Details
- Captures: 8
- Readers: 8
Conference Paper Description
Obtaining news from social media platforms has become increasingly popular due to their ease of access and high speed of information dissemination. These same factors have, however, also increased the range and speed at which misinformation and fake news spread. While machine-run accounts (bots) contribute significantly to the spread of misinformation, human users on these platforms also play a key role. Thus, there is a need for an in-depth understanding of the relationship between users and the spread of fake news. This paper proposes a new data-driven metric, the User Impact Factor (UIF), that aims to show the importance of user content analysis and neighbourhood influence in profiling a fake news spreader on Twitter. Tweets and retweets of each user are collected and classified as 'fake' or 'not fake' using Natural Language Processing (NLP). These labeled posts are combined with the user's follower count and retweet potential to generate the user's impact factor. Experiments are performed using data collected from Twitter, and the results show the effectiveness of the proposed approach in identifying fake news spreaders.
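The abstract describes combining NLP-labeled posts with follower count and retweet potential, but does not give the UIF formula. The sketch below is purely illustrative: it assumes a hypothetical scoring in which a user's fake-post ratio is scaled by their reach (followers plus retweets received). The class and function names are the author's own invention, not from the paper.

```python
# Illustrative (hypothetical) User Impact Factor computation.
# The paper's actual formula is not reproduced here; this sketch only
# shows the kind of combination the abstract describes.

from dataclasses import dataclass


@dataclass
class UserActivity:
    posts_fake: int    # posts an NLP classifier labeled 'fake'
    posts_total: int   # all collected tweets and retweets of the user
    followers: int     # follower count (neighbourhood influence)
    retweets: int      # retweets received (retweet potential)


def user_impact_factor(u: UserActivity) -> float:
    """Toy UIF: fake-post ratio scaled by the user's reach."""
    if u.posts_total == 0:
        return 0.0
    fake_ratio = u.posts_fake / u.posts_total
    reach = u.followers + u.retweets
    return fake_ratio * reach


# A user whose posts are mostly fake and who has a large audience
# scores high, matching the intuition of a "fake news spreader".
spreader = UserActivity(posts_fake=40, posts_total=50,
                        followers=1000, retweets=250)
print(user_impact_factor(spreader))  # 0.8 * 1250 = 1000.0
```

Under this toy scoring, two users with the same fake-post ratio are separated by their audience size, which is the neighbourhood-influence idea the abstract emphasizes.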
Bibliographic Details
- http://www.scopus.com/inward/record.url?partnerID=HzOxMe3b&scp=85163361350&origin=inward
- http://dx.doi.org/10.1007/978-3-031-33614-0_13
- https://link.springer.com/10.1007/978-3-031-33614-0_13
- https://dx.doi.org/10.1007/978-3-031-33614-0_13
- https://link.springer.com/chapter/10.1007/978-3-031-33614-0_13
Springer Science and Business Media LLC