Alignment and stability of embeddings: Measurement and inference improvement
Neurocomputing, ISSN: 0925-2312, Vol: 553, Page: 126517
2023
- 1 Citation
- 9 Captures
- 2 Mentions
Metric Options: Counts. Selecting the 1-year or 3-year option will change the metrics count to percentiles, illustrating how an article or review compares to other articles or reviews within the selected time period in the same journal. Selecting the 1-year option compares the metrics against other articles/reviews that were also published in the same calendar year. Selecting the 3-year option compares the metrics against other articles/reviews that were also published in the same calendar year plus the two years prior.
Example: if you select the 1-year option for an article published in 2019 and a metric category shows 90%, that means that the article or review is performing better than 90% of the other articles/reviews published in that journal in 2019. If you select the 3-year option for the same article published in 2019 and the metric category shows 90%, that means that the article or review is performing better than 90% of the other articles/reviews published in that journal in 2019, 2018 and 2017.
Citation Benchmarking is provided by Scopus and SciVal and is different from the metrics context provided by PlumX Metrics.
Article Description
Representation learning (RL) methods learn objects’ latent embeddings in which information is preserved by distances. Since certain distance functions are invariant to certain linear transformations, one may obtain different embeddings that preserve the same information. In dynamic systems, a temporal difference between embeddings may therefore reflect genuine structural change in the system or mere misalignment of the embeddings caused by arbitrary transformations. This study focuses on the embedding alignment problem: distinguishing structural changes inherent to a system from arbitrary changes introduced by representation learning methods, and quantifying their magnitudes. To avoid confusion with naming conventions in the literature, note that embedding alignment is distinct from graph matching/network alignment. Although the embedding alignment issue has been acknowledged in the representation learning literature, its measurement and empirical analysis have not received sufficient attention. In this work, we explore embedding alignment and its components, propose novel metrics to measure alignment and stability, and demonstrate their suitability through synthetic experiments. Real-world experiments show that both static and dynamic RL methods are prone to producing misaligned embeddings, and such misalignment worsens the performance of dynamic network inference tasks. By ensuring alignment, prediction accuracy improves by up to 90% for static and up to 40% for dynamic RL methods.
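The abstract's core point is that distance-preserving embeddings are only determined up to certain linear transformations (e.g., rotations), so two embedding snapshots must be aligned before their difference can be read as structural change. The sketch below is not the paper's proposed method; it is a minimal illustration of the alignment idea using standard orthogonal Procrustes from SciPy, with an illustrative function name and toy data.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def align_embeddings(X_prev, X_curr):
    """Align X_curr to X_prev via orthogonal Procrustes.

    Finds the orthogonal matrix R minimizing ||X_curr @ R - X_prev||_F,
    so that any residual difference reflects structural change rather
    than an arbitrary rotation introduced by the RL method.
    """
    R, _ = orthogonal_procrustes(X_curr, X_prev)
    return X_curr @ R

# Toy check: a pure rotation of the same embedding should align exactly.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))                  # 100 nodes, 16-dim embeddings
Q, _ = np.linalg.qr(rng.normal(size=(16, 16)))  # random orthogonal matrix
X_rotated = X @ Q                               # same information, misaligned

aligned = align_embeddings(X, X_rotated)
print(np.allclose(aligned, X, atol=1e-8))       # True: misalignment removed
```

In this toy case the rotated copy carries exactly the same information, so alignment recovers the original embedding; on real dynamic data, any residual remaining after such an alignment step is what can be attributed to actual change in the system.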
Bibliographic Details
- http://www.sciencedirect.com/science/article/pii/S0925231223006409
- http://dx.doi.org/10.1016/j.neucom.2023.126517
- http://www.scopus.com/inward/record.url?partnerID=HzOxMe3b&scp=85169923881&origin=inward
- https://linkinghub.elsevier.com/retrieve/pii/S0925231223006409
- https://dx.doi.org/10.1016/j.neucom.2023.126517
Elsevier BV