Evaluating a Century of Progress on the Cognitive Science of Adjective Ordering
Transactions of the Association for Computational Linguistics, ISSN: 2307-387X, Vol: 11, Page: 1185-1200
2023
- 1 Citation
- 6 Captures
Article Description
The literature on adjective ordering abounds with proposals meant to account for why certain adjectives appear before others in multi-adjective strings (e.g., the small brown box). However, these proposals have been developed and tested primarily in isolation and based on English; few researchers have looked at the combined performance of multiple factors in the determination of adjective order, and few have evaluated predictors across multiple languages. The current work approaches both of these objectives by using technologies and datasets from natural language processing to look at the combined performance of existing proposals across 32 languages. Comparing this performance with both random and idealized baselines, we show that the literature on adjective ordering has made significant meaningful progress across its many decades, but there remains quite a gap yet to be explained.
Bibliographic Details
MIT Press