Dynamic Feed-Forward LSTM
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), ISSN: 1611-3349, Vol. 14117 LNAI, Pages: 191-202
2023
Conference Paper Description
We address two weaknesses of existing LSTMs caused by their horizontal recurrent steps: the limited capacity of a shared hidden state and the single-direction flow of information. To this end, we propose the Dynamic Feed-Forward LSTM (D-LSTM). Specifically, our D-LSTM first expands the capacity of hidden states by assigning an exclusive state vector to each word. A Dynamic Additive Attention (DAA) method then adaptively compresses local context words into a fixed-size vector. Finally, a vertical feed-forward process searches for context relations by filtering informative features from the compressed context vector and updating the hidden states. With exclusive hidden states, each word preserves its most correlated context features, and hidden states do not interfere with one another. By setting an appropriate context window size for DAA and stacking multiple such layers, the context scope gradually expands from a central word toward both sides, covering the whole sentence at the top layer. Furthermore, because its propagation is vertical rather than recurrent, the D-LSTM module is compatible with parallel computing and amenable to training via back-propagation. Experimental results on both classification and sequence-tagging datasets show that our models achieve competitive performance compared to existing LSTMs.
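To make the described mechanism concrete, the following PyTorch sketch implements one plausible reading of a D-LSTM layer: a per-word hidden state, additive attention over a local window, and a gated feed-forward update. The class names, the exact attention scoring, and the gated residual update are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DLSTMLayer(nn.Module):
    """One D-LSTM layer (sketch): every word keeps its own hidden state;
    Dynamic Additive Attention (DAA) compresses a local context window into
    a fixed-size vector, and a gated feed-forward step updates the state.
    Gating and scoring details are assumptions for illustration."""

    def __init__(self, dim: int, window: int = 3):
        super().__init__()
        self.window = window  # context words on each side of the centre word
        # additive-attention scoring over the local window
        self.w_query = nn.Linear(dim, dim)
        self.w_key = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, 1, bias=False)
        # gated update: filter informative features from the compressed context
        self.w_gate = nn.Linear(2 * dim, dim)
        self.w_cand = nn.Linear(2 * dim, dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:  # h: (batch, seq, dim)
        w = self.window
        # zero-pad the sequence, then gather a (2w+1)-word window per position
        pad = F.pad(h, (0, 0, w, w))                # (batch, seq + 2w, dim)
        ctx = pad.unfold(1, 2 * w + 1, 1)           # (batch, seq, dim, 2w+1)
        ctx = ctx.transpose(2, 3)                   # (batch, seq, 2w+1, dim)
        # additive attention: score each window word against the centre word
        scores = self.v(torch.tanh(
            self.w_query(h).unsqueeze(2) + self.w_key(ctx)))  # (b, s, 2w+1, 1)
        alpha = torch.softmax(scores, dim=2)
        c = (alpha * ctx).sum(dim=2)                # compressed context (b, s, dim)
        # vertical feed-forward update of each word's exclusive hidden state
        hc = torch.cat([h, c], dim=-1)
        gate = torch.sigmoid(self.w_gate(hc))       # input-gate-style filter
        cand = torch.tanh(self.w_cand(hc))
        return h + gate * cand                      # residual state update

class DLSTM(nn.Module):
    """Stack of layers; the context scope grows with depth."""
    def __init__(self, dim: int, depth: int = 3, window: int = 3):
        super().__init__()
        self.layers = nn.ModuleList(DLSTMLayer(dim, window) for _ in range(depth))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            x = layer(x)
        return x

# Usage: h = DLSTM(dim=128, depth=3, window=3)(torch.randn(8, 20, 128))
```

With a window of w words per side and L stacked layers, each position can draw on up to L*w words on either side, matching the paper's claim that the context scope expands layer by layer until it covers the whole sentence at the top; and since no position depends on the previous time step, all positions are computed in parallel.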
Bibliographic Details
http://www.scopus.com/inward/record.url?partnerID=HzOxMe3b&scp=85172298066&origin=inward
http://dx.doi.org/10.1007/978-3-031-40283-8_17
https://dx.doi.org/10.1007/978-3-031-40283-8_17
https://link.springer.com/10.1007/978-3-031-40283-8_17
https://link.springer.com/chapter/10.1007/978-3-031-40283-8_17
Springer Science and Business Media LLC