FAiR: A framework for analyses and evaluations on recommender systems
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), ISSN: 1611-3349, Vol: 10962 LNCS, Pages: 383-397
2018
- Captures: 10
Metric Options: Counts
Selecting the 1-year or 3-year option changes the metric counts to percentiles, illustrating how an article or review compares to other articles or reviews published in the same journal within the selected time period. The 1-year option compares the metrics against other articles/reviews published in the same calendar year; the 3-year option compares them against other articles/reviews published in the same calendar year plus the two years prior.
Example: if you select the 1-year option for an article published in 2019 and a metric category shows 90%, the article or review is performing better than 90% of the other articles/reviews published in that journal in 2019. If you select the 3-year option for the same article, the same 90% means it is performing better than 90% of the articles/reviews published in that journal in 2017, 2018, and 2019.
Citation Benchmarking is provided by Scopus and SciVal and is different from the metrics context provided by PlumX Metrics.
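To make the arithmetic behind this benchmarking concrete, here is a minimal sketch of a percentile-rank computation over a cohort of articles. All function names and citation counts are hypothetical illustrations, not the actual Scopus/SciVal implementation.

```python
# Minimal sketch of the percentile benchmarking described above.
# Names and data are hypothetical; this only illustrates the arithmetic.

def percentile_rank(article_citations: int, cohort_citations: list[int]) -> float:
    """Share of cohort articles this article outperforms, as a percentage."""
    if not cohort_citations:
        return 0.0
    outperformed = sum(1 for c in cohort_citations if c < article_citations)
    return 100.0 * outperformed / len(cohort_citations)

# 1-year option: cohort = same journal, same calendar year (e.g., 2019).
cohort_2019 = [3, 7, 1, 12, 5, 9, 2, 4]           # hypothetical counts
print(percentile_rank(10, cohort_2019))            # 87.5

# 3-year option: cohort = same journal, 2017-2019 combined.
cohort_2017_2019 = cohort_2019 + [6, 8, 0, 11]     # hypothetical counts
print(percentile_rank(10, cohort_2017_2019))
```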
Metrics Details
- Captures: 10
- Readers: 10
Conference Paper Description
Recommender systems (RSs) have become essential tools in e-commerce applications, helping users in the decision-making process. Evaluating these tools, however, remains a major point of divergence, since there is no consensus regarding which metrics are necessary to consolidate new RSs. For this reason, distinct frameworks have been developed to ease the deployment of RSs in research and/or production environments. In the present work, we perform an extensive study of the most popular evaluation metrics, organizing them into three groups: Effectiveness-based, Complementary Dimensions of Quality, and Domain Profiling. Further, we consolidate a framework named FAiR to help researchers evaluate their RSs using these metrics, as well as identify characteristics of data collections that may intrinsically affect RS performance. FAiR is compatible with the output format of the main existing RS libraries (i.e., MyMediaLite and LensKit).
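As an illustration of the kind of metrics in the effectiveness-based group, the sketch below computes Precision@k and nDCG@k for one user's ranked recommendations. The function names and input format are assumptions for this example; they do not reflect FAiR's actual interface.

```python
import math

# Illustrative effectiveness-based metrics (Precision@k and nDCG@k)
# of the kind the paper surveys. Names and inputs are hypothetical.

def precision_at_k(ranked: list[str], relevant: set[str], k: int) -> float:
    """Fraction of the top-k recommended items that are relevant."""
    return sum(1 for item in ranked[:k] if item in relevant) / k

def ndcg_at_k(ranked: list[str], relevant: set[str], k: int) -> float:
    """Normalized discounted cumulative gain with binary relevance."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked[:k]) if item in relevant)
    ideal_hits = min(len(relevant), k)
    idcg = sum(1.0 / math.log2(i + 2) for i in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0

ranked = ["i3", "i7", "i1", "i9", "i4"]   # hypothetical ranking for one user
relevant = {"i1", "i4", "i8"}             # hypothetical ground truth
print(precision_at_k(ranked, relevant, k=5))  # 0.4
print(ndcg_at_k(ranked, relevant, k=5))       # ~0.416
```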
Bibliographic Details
- http://www.scopus.com/inward/record.url?partnerID=HzOxMe3b&scp=85049885450&origin=inward
- http://dx.doi.org/10.1007/978-3-319-95168-3_26
- http://link.springer.com/10.1007/978-3-319-95168-3_26
- http://link.springer.com/content/pdf/10.1007/978-3-319-95168-3_26
- https://dx.doi.org/10.1007/978-3-319-95168-3_26
- https://link.springer.com/chapter/10.1007/978-3-319-95168-3_26
Springer Nature