When to stop — A cardinal secretary search experiment
Journal of Mathematical Psychology, ISSN 0022-2496, Vol. 98, Article 102425, 2020
- 3 Citations
- 5 Captures
Article Description
The cardinal secretary search problem confronts the decision maker with a varying number of candidates whose values are independently and identically distributed and who appear successively in random order, with no recall of earlier candidates. Its benchmark solution implies monotonically decreasing sequences of optimal value aspirations (acceptance thresholds) as the number of remaining candidates declines. We compare experimentally observed aspirations with optimal ones for different numbers of (remaining) candidates and for three methods of experimental choice elicitation: "hot" collects play data, "warm" asks for an acceptance threshold before confronting the next candidate, and "cold" asks for a complete profile of trial-specific acceptance thresholds. The initially available number of candidates varies across elicitation methods to obtain more balanced data. We find that actual search differs from benchmark behavior not only in average search length and success but also in some puzzling qualitative respects.
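The benchmark policy referenced above can be computed by backward induction over the continuation value of the search. The sketch below is a minimal illustration, not the authors' implementation, and it assumes candidate values are i.i.d. uniform on [0, 1] (the abstract does not state the distribution used in the experiment): with k candidates still to come, it is optimal to accept the current value exactly when it meets the expected value V(k) of continuing the search.

```python
# Backward-induction sketch of the benchmark solution for cardinal
# secretary search, assuming values are i.i.d. uniform on [0, 1]
# (the distribution is an assumption; the abstract does not specify one).
# With k candidates still to come, the optimal policy accepts the
# current value x iff x >= V(k), where V(k) is the expected payoff of
# continuing the search with k candidates left.

def optimal_thresholds(n: int) -> list[float]:
    """Return [V(0), V(1), ..., V(n-1)] for a search over n candidates."""
    v = 0.0  # V(0): no candidates left, so continuing is worthless
    thresholds = [v]
    for _ in range(1, n):
        # V(k) = E[max(X, V(k-1))] for X ~ U(0, 1):
        # integral of max(x, v) over [0, 1] = v^2 + (1 - v^2)/2
        v = (1.0 + v * v) / 2.0
        thresholds.append(v)
    return thresholds

if __name__ == "__main__":
    for k, v in enumerate(optimal_thresholds(10)):
        print(f"{k} candidates remaining: accept current value if >= {v:.4f}")
```

Running the sketch reproduces the monotonicity property the abstract describes: thresholds of 0.5, 0.625, 0.695, ... for 1, 2, 3 remaining candidates, so optimal aspirations fall steadily as the search approaches its forced end.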
Bibliographic Details
- http://www.sciencedirect.com/science/article/pii/S0022249620300742
- http://dx.doi.org/10.1016/j.jmp.2020.102425
- http://www.scopus.com/inward/record.url?partnerID=HzOxMe3b&scp=85087912163&origin=inward
- https://linkinghub.elsevier.com/retrieve/pii/S0022249620300742
- https://api.elsevier.com/content/article/PII:S0022249620300742?httpAccept=text/xml
- https://api.elsevier.com/content/article/PII:S0022249620300742?httpAccept=text/plain
Elsevier BV