Profiling and predicting the cumulative helpfulness (Quality) of crowd-sourced reviews
Information (Switzerland), ISSN: 2078-2489, Vol: 10, Issue: 10
2019
- 11 Citations
- 252 Captures
Article Description
With easy access to the Internet and the popularity of online review platforms, the volume of crowd-sourced reviews is continuously rising. Many studies have acknowledged the importance of reviews in purchase decisions, and consumer feedback plays a vital role in the success or failure of a business. As reviews grow in importance, so does the number of studies on predicting review helpfulness and ranking reviews. However, previous studies have mainly focused on predicting the helpfulness of individual reviews and reviewers. This study aimed to profile the cumulative helpfulness received by a business and then use it for business ranking. The reliability of the proposed cumulative helpfulness for ranking was illustrated using a dataset of 192,606 businesses from Yelp.com. Seven business features and four reviewer features were identified to predict cumulative helpfulness using Linear Regression (LNR), Gradient Boosting (GB), and Neural Network (NNet). The dataset was subdivided into 12 datasets based on business categories to predict cumulative helpfulness. The results show that business features, including star rating, review count, and days since the last review, are the most important features across all business categories. Moreover, using reviewer features along with business features improves prediction performance for seven of the 12 datasets. Lastly, the implications of this study are discussed for researchers, review platforms, and businesses.
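The abstract names three regressors and a few of the business features used. A minimal sketch of such a pipeline in scikit-learn might look like the following; the file name, column names, and reviewer aggregates are hypothetical placeholders, not the authors' actual schema or code.

```python
# Sketch: predicting a business's cumulative helpfulness from business and
# reviewer features, in the spirit of the study. All column names and the
# CSV file are assumptions for illustration only.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("yelp_businesses.csv")  # hypothetical export of the Yelp dataset

# Business features named in the abstract, plus hypothetical reviewer aggregates.
features = [
    "star_rating", "review_count", "days_since_last_review",
    "avg_reviewer_review_count", "avg_reviewer_useful_votes",
]
target = "cumulative_helpfulness"  # total helpful votes on a business's reviews

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[target], test_size=0.2, random_state=42)

# The three model families compared in the study: LNR, GB, and NNet.
models = {
    "LNR": LinearRegression(),
    "GB": GradientBoostingRegressor(random_state=42),
    "NNet": MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=42),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```

Per the abstract, this fitting would be repeated once per business-category subset (12 datasets), with and without the reviewer features, to compare prediction performance.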