An experiment in intelligent text processing.
2001
Metric Options: Counts
Selecting the 1-year or 3-year option changes the metric counts to percentiles, illustrating how an article or review compares to other articles or reviews published in the same journal within the selected time period. The 1-year option compares the metrics against other articles/reviews published in the same calendar year; the 3-year option compares them against other articles/reviews published in the same calendar year plus the two years prior.
Example: if you select the 1-year option for an article published in 2019 and a metric category shows 90%, that means that the article or review is performing better than 90% of the other articles/reviews published in that journal in 2019. If you select the 3-year option for the same article published in 2019 and the metric category shows 90%, that means that the article or review is performing better than 90% of the other articles/reviews published in that journal in 2019, 2018 and 2017.
Citation Benchmarking is provided by Scopus and SciVal and is different from the metrics context provided by PlumX Metrics.
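The percentile comparison described above can be summarized with a small sketch. This is not the Scopus/SciVal or PlumX implementation; the function names, the sample records, and the use of citation counts as the metric are illustrative assumptions.

```python
def benchmark_percentile(article_value, peer_values):
    """Percent of peer articles/reviews that this article outperforms."""
    if not peer_values:
        return None
    beaten = sum(1 for v in peer_values if article_value > v)
    return round(100 * beaten / len(peer_values))

def peers_in_window(records, journal, pub_year, window_years):
    """Metric values for other articles in the same journal published within
    a window of `window_years` calendar years ending at `pub_year` (1 or 3)."""
    start = pub_year - (window_years - 1)
    return [r["citations"] for r in records
            if r["journal"] == journal and start <= r["year"] <= pub_year]

# Hypothetical peers for an article published in 2019, 3-year option (2017-2019).
peers = [
    {"journal": "J. Example", "year": 2019, "citations": 3},
    {"journal": "J. Example", "year": 2018, "citations": 7},
    {"journal": "J. Example", "year": 2017, "citations": 15},
]
# An article with 12 citations outperforms 2 of the 3 peers -> 67th percentile.
print(benchmark_percentile(12, peers_in_window(peers, "J. Example", 2019, 3)))
```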
Metrics Details
- Usage: 215
- Abstract Views: 143
- Downloads: 72
Thesis / Dissertation Description
Traditional approaches to information retrieval (IR) are statistical in nature and depend on data gathered from corpus analysis. While such approaches yield immediately practical computational models for IR, they exhibit a maximal accuracy of 40% when applied to open-ended corpora. Language-based approaches, on the other hand, offer more intuitive solutions to text retrieval with higher precision. The problem is that these approaches require natural language understanding (NLU), which we do not yet have. It is commonly accepted that vast amounts of commonsense knowledge are needed to attempt problems in NLU, and it is the lack of adequate knowledge structures for representing commonsense knowledge that has left intelligent IR little short of being abandoned. In this thesis, we emphasize that complete NLU is not necessary for intelligent IR, as we do not have to understand the text completely: discovering the aboutness of a text is sufficient to perform intelligent IR with precision far better than the traditional approaches. We prove this thesis existentially by implementing Digital Agora, an intelligent IR system that indexes and retrieves text based on subject content. Although we were successful in showing that intelligent IR based on conceptual analysis is possible without complete NLU, we observed that the computational complexity such an approach demands makes it impractical. Identifying lexical disambiguation as this complexity bottleneck, we implement, test, and present initial results of a computational model, based on the Formal Ontology, that performs lexical disambiguation by parallel marker propagation.

Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2001 .R34. Source: Masters Abstracts International, Volume: 42-03, page: 0972. Advisers: Walid Saba; Robert D. Kent. Thesis (M.Sc.)--University of Windsor (Canada), 2001.
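The description above mentions lexical disambiguation by parallel marker propagation over a Formal Ontology. The toy sketch below is not drawn from the thesis: the concept graph, sense labels, and the sequential breadth-first propagate() function are hypothetical, meant only to illustrate the general marker-passing idea of preferring the sense whose ontology neighbourhood overlaps most with that of the surrounding words' senses.

```python
from collections import defaultdict

# Hypothetical ontology: each sense node is linked to a few concept nodes.
EDGES = {
    "bank#finance": {"money", "loan", "account"},
    "bank#river": {"water", "shore"},
    "deposit#finance": {"money", "account"},
    "deposit#geology": {"sediment", "water"},
}

# Undirected adjacency built from the edge list above.
GRAPH = defaultdict(set)
for node, nbrs in EDGES.items():
    for n in nbrs:
        GRAPH[node].add(n)
        GRAPH[n].add(node)

def propagate(sense, depth=2):
    """Breadth-first marker spread from a sense node, `depth` links out."""
    marked, frontier = {sense}, {sense}
    for _ in range(depth):
        frontier = {n for f in frontier for n in GRAPH[f]} - marked
        marked |= frontier
    return marked

def disambiguate(word_senses, context_senses):
    """Prefer the sense whose markers overlap most with the markers spread
    from the candidate senses of the surrounding words."""
    context_marks = set().union(*(propagate(s) for s in context_senses))
    return max(word_senses, key=lambda s: len(propagate(s) & context_marks))

# "bank" occurring near "deposit": the financial senses reinforce each other
# through shared concepts ("money", "account"), so bank#finance is chosen.
print(disambiguate(["bank#finance", "bank#river"],
                   ["deposit#finance", "deposit#geology"]))
```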
Bibliographic Details